00:00:00.001 Started by upstream project "autotest-per-patch" build number 126179 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.037 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.038 The recommended git tool is: git 00:00:00.038 using credential 00000000-0000-0000-0000-000000000002 00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.055 Fetching changes from the remote Git repository 00:00:00.062 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.087 Using shallow fetch with depth 1 00:00:00.087 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.087 > git --version # timeout=10 00:00:00.119 > git --version # 'git version 2.39.2' 00:00:00.119 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.150 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.150 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.274 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.285 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.299 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.299 > git config core.sparsecheckout # timeout=10 00:00:03.312 > git read-tree -mu HEAD # timeout=10 00:00:03.331 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.364 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.364 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.484 [Pipeline] Start of Pipeline 00:00:03.500 [Pipeline] library 00:00:03.502 Loading library shm_lib@master 00:00:03.502 Library shm_lib@master is cached. Copying from home. 00:00:03.524 [Pipeline] node 00:00:03.534 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.538 [Pipeline] { 00:00:03.548 [Pipeline] catchError 00:00:03.550 [Pipeline] { 00:00:03.561 [Pipeline] wrap 00:00:03.570 [Pipeline] { 00:00:03.578 [Pipeline] stage 00:00:03.580 [Pipeline] { (Prologue) 00:00:03.761 [Pipeline] sh 00:00:04.045 + logger -p user.info -t JENKINS-CI 00:00:04.062 [Pipeline] echo 00:00:04.064 Node: CYP12 00:00:04.071 [Pipeline] sh 00:00:04.367 [Pipeline] setCustomBuildProperty 00:00:04.379 [Pipeline] echo 00:00:04.381 Cleanup processes 00:00:04.386 [Pipeline] sh 00:00:04.667 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.667 327322 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.679 [Pipeline] sh 00:00:04.959 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.960 ++ grep -v 'sudo pgrep' 00:00:04.960 ++ awk '{print $1}' 00:00:04.960 + sudo kill -9 00:00:04.960 + true 00:00:04.974 [Pipeline] cleanWs 00:00:04.983 [WS-CLEANUP] Deleting project workspace... 00:00:04.983 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.989 [WS-CLEANUP] done 00:00:04.996 [Pipeline] setCustomBuildProperty 00:00:05.015 [Pipeline] sh 00:00:05.294 + sudo git config --global --replace-all safe.directory '*' 00:00:05.359 [Pipeline] httpRequest 00:00:05.381 [Pipeline] echo 00:00:05.382 Sorcerer 10.211.164.101 is alive 00:00:05.391 [Pipeline] httpRequest 00:00:05.395 HttpMethod: GET 00:00:05.395 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.395 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.398 Response Code: HTTP/1.1 200 OK 00:00:05.399 Success: Status code 200 is in the accepted range: 200,404 00:00:05.399 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.213 [Pipeline] sh 00:00:06.494 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:06.510 [Pipeline] httpRequest 00:00:06.534 [Pipeline] echo 00:00:06.536 Sorcerer 10.211.164.101 is alive 00:00:06.544 [Pipeline] httpRequest 00:00:06.548 HttpMethod: GET 00:00:06.549 URL: http://10.211.164.101/packages/spdk_c6070605c600e0531699cfb0b8237ec47173be82.tar.gz 00:00:06.549 Sending request to url: http://10.211.164.101/packages/spdk_c6070605c600e0531699cfb0b8237ec47173be82.tar.gz 00:00:06.562 Response Code: HTTP/1.1 200 OK 00:00:06.563 Success: Status code 200 is in the accepted range: 200,404 00:00:06.563 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c6070605c600e0531699cfb0b8237ec47173be82.tar.gz 00:00:37.842 [Pipeline] sh 00:00:38.125 + tar --no-same-owner -xf spdk_c6070605c600e0531699cfb0b8237ec47173be82.tar.gz 00:00:40.678 [Pipeline] sh 00:00:40.961 + git -C spdk log --oneline -n5 00:00:40.961 c6070605c bdev/compress: remove the code about the config json 00:00:40.961 719d03c6a sock/uring: only register net impl if supported 00:00:40.961 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:40.961 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:40.961 6c7c1f57e accel: add sequence outstanding stat 00:00:40.971 [Pipeline] } 00:00:40.982 [Pipeline] // stage 00:00:40.989 [Pipeline] stage 00:00:40.991 [Pipeline] { (Prepare) 00:00:41.006 [Pipeline] writeFile 00:00:41.020 [Pipeline] sh 00:00:41.299 + logger -p user.info -t JENKINS-CI 00:00:41.312 [Pipeline] sh 00:00:41.634 + logger -p user.info -t JENKINS-CI 00:00:41.644 [Pipeline] sh 00:00:41.924 + cat autorun-spdk.conf 00:00:41.924 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:41.924 SPDK_TEST_NVMF=1 00:00:41.924 SPDK_TEST_NVME_CLI=1 00:00:41.924 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:41.924 SPDK_TEST_NVMF_NICS=e810 00:00:41.924 SPDK_TEST_VFIOUSER=1 00:00:41.924 SPDK_RUN_UBSAN=1 00:00:41.924 NET_TYPE=phy 00:00:41.932 RUN_NIGHTLY=0 00:00:41.938 [Pipeline] readFile 00:00:41.960 [Pipeline] withEnv 00:00:41.961 [Pipeline] { 00:00:41.972 [Pipeline] sh 00:00:42.255 + set -ex 00:00:42.255 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:42.255 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:42.255 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.255 ++ SPDK_TEST_NVMF=1 00:00:42.255 ++ SPDK_TEST_NVME_CLI=1 00:00:42.255 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.255 ++ SPDK_TEST_NVMF_NICS=e810 00:00:42.255 ++ SPDK_TEST_VFIOUSER=1 00:00:42.255 ++ SPDK_RUN_UBSAN=1 00:00:42.255 ++ NET_TYPE=phy 00:00:42.255 ++ RUN_NIGHTLY=0 00:00:42.255 + case $SPDK_TEST_NVMF_NICS in 00:00:42.255 + DRIVERS=ice 00:00:42.255 + [[ 
tcp == \r\d\m\a ]] 00:00:42.255 + [[ -n ice ]] 00:00:42.255 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:42.255 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:42.255 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:42.255 rmmod: ERROR: Module irdma is not currently loaded 00:00:42.255 rmmod: ERROR: Module i40iw is not currently loaded 00:00:42.255 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:42.255 + true 00:00:42.255 + for D in $DRIVERS 00:00:42.255 + sudo modprobe ice 00:00:42.255 + exit 0 00:00:42.263 [Pipeline] } 00:00:42.278 [Pipeline] // withEnv 00:00:42.283 [Pipeline] } 00:00:42.299 [Pipeline] // stage 00:00:42.310 [Pipeline] catchError 00:00:42.312 [Pipeline] { 00:00:42.328 [Pipeline] timeout 00:00:42.329 Timeout set to expire in 50 min 00:00:42.331 [Pipeline] { 00:00:42.346 [Pipeline] stage 00:00:42.348 [Pipeline] { (Tests) 00:00:42.367 [Pipeline] sh 00:00:42.648 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:42.648 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:42.648 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:42.648 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:42.648 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:42.648 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:42.648 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:42.648 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:42.648 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:42.648 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:42.648 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:42.648 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:42.648 + source /etc/os-release 00:00:42.648 ++ NAME='Fedora Linux' 00:00:42.648 ++ VERSION='38 (Cloud Edition)' 00:00:42.648 ++ ID=fedora 00:00:42.648 ++ VERSION_ID=38 00:00:42.648 ++ VERSION_CODENAME= 00:00:42.648 ++ PLATFORM_ID=platform:f38 00:00:42.648 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:42.648 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:42.648 ++ LOGO=fedora-logo-icon 00:00:42.648 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:42.648 ++ HOME_URL=https://fedoraproject.org/ 00:00:42.648 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:42.648 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:42.648 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:42.648 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:42.648 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:42.648 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:42.648 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:42.648 ++ SUPPORT_END=2024-05-14 00:00:42.648 ++ VARIANT='Cloud Edition' 00:00:42.648 ++ VARIANT_ID=cloud 00:00:42.648 + uname -a 00:00:42.648 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:42.648 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:45.939 Hugepages 00:00:45.940 node hugesize free / total 00:00:45.940 node0 1048576kB 0 / 0 00:00:45.940 node0 2048kB 0 / 0 00:00:45.940 node1 1048576kB 0 / 0 00:00:45.940 node1 2048kB 0 / 0 00:00:45.940 00:00:45.940 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:45.940 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:45.940 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:45.940 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:45.940 
I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:45.940 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:45.940 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:45.940 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:45.940 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:45.940 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:46.201 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:46.201 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:46.201 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:46.201 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:46.201 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:46.201 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:46.201 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:46.201 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:46.201 + rm -f /tmp/spdk-ld-path 00:00:46.201 + source autorun-spdk.conf 00:00:46.201 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.201 ++ SPDK_TEST_NVMF=1 00:00:46.201 ++ SPDK_TEST_NVME_CLI=1 00:00:46.201 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.201 ++ SPDK_TEST_NVMF_NICS=e810 00:00:46.201 ++ SPDK_TEST_VFIOUSER=1 00:00:46.201 ++ SPDK_RUN_UBSAN=1 00:00:46.201 ++ NET_TYPE=phy 00:00:46.201 ++ RUN_NIGHTLY=0 00:00:46.201 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:46.201 + [[ -n '' ]] 00:00:46.201 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:46.201 + for M in /var/spdk/build-*-manifest.txt 00:00:46.201 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:46.201 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:46.201 + for M in /var/spdk/build-*-manifest.txt 00:00:46.201 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:46.201 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:46.201 ++ uname 00:00:46.201 + [[ Linux == \L\i\n\u\x ]] 00:00:46.201 + sudo dmesg -T 00:00:46.201 + sudo dmesg --clear 00:00:46.201 + dmesg_pid=328416 00:00:46.201 + [[ Fedora Linux == FreeBSD ]] 00:00:46.201 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:46.201 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:46.201 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:46.201 + [[ -x /usr/src/fio-static/fio ]] 00:00:46.201 + export FIO_BIN=/usr/src/fio-static/fio 00:00:46.201 + FIO_BIN=/usr/src/fio-static/fio 00:00:46.201 + sudo dmesg -Tw 00:00:46.201 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:46.201 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:46.201 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:46.201 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:46.201 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:46.201 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:46.201 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:46.201 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:46.201 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:46.201 Test configuration: 00:00:46.201 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.202 SPDK_TEST_NVMF=1 00:00:46.202 SPDK_TEST_NVME_CLI=1 00:00:46.202 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.202 SPDK_TEST_NVMF_NICS=e810 00:00:46.202 SPDK_TEST_VFIOUSER=1 00:00:46.202 SPDK_RUN_UBSAN=1 00:00:46.202 NET_TYPE=phy 00:00:46.202 RUN_NIGHTLY=0 12:46:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:46.202 12:46:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:46.202 12:46:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:46.202 12:46:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:46.202 12:46:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:46.202 12:46:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:46.202 12:46:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:46.202 12:46:08 -- paths/export.sh@5 -- $ export PATH 00:00:46.202 12:46:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:46.202 12:46:08 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:46.202 12:46:08 -- common/autobuild_common.sh@444 -- $ date +%s 00:00:46.202 12:46:08 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721040368.XXXXXX 00:00:46.202 12:46:08 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721040368.hAksgk 00:00:46.202 12:46:08 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:00:46.202 12:46:08 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:00:46.202 12:46:08 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:46.202 12:46:08 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:46.202 12:46:08 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:46.463 12:46:08 -- common/autobuild_common.sh@460 -- $ get_config_params 00:00:46.463 12:46:08 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:46.463 12:46:08 -- common/autotest_common.sh@10 -- $ set +x 00:00:46.463 12:46:08 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:46.463 12:46:08 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:00:46.463 12:46:08 -- pm/common@17 -- $ local monitor 00:00:46.463 12:46:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.463 12:46:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.463 12:46:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.463 12:46:08 -- pm/common@21 -- $ date +%s 00:00:46.463 12:46:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.463 12:46:08 -- pm/common@25 -- $ sleep 1 00:00:46.463 12:46:08 -- pm/common@21 -- $ date +%s 00:00:46.463 12:46:08 -- pm/common@21 -- $ date +%s 00:00:46.463 12:46:08 -- pm/common@21 -- $ date +%s 00:00:46.463 12:46:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721040368 00:00:46.463 12:46:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721040368 00:00:46.463 12:46:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721040368 00:00:46.463 12:46:08 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721040368 00:00:46.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721040368_collect-vmstat.pm.log 00:00:46.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721040368_collect-cpu-load.pm.log 00:00:46.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721040368_collect-cpu-temp.pm.log 00:00:46.463 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721040368_collect-bmc-pm.bmc.pm.log 00:00:47.404 12:46:09 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:00:47.404 12:46:09 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:47.404 12:46:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:47.404 12:46:09 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:47.404 12:46:09 -- spdk/autobuild.sh@16 -- $ date -u 00:00:47.404 Mon Jul 15 10:46:09 AM UTC 2024 00:00:47.404 12:46:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:47.404 v24.09-pre-203-gc6070605c 00:00:47.404 12:46:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:47.404 12:46:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:47.404 12:46:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:47.404 12:46:09 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:47.404 12:46:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:47.404 12:46:09 -- common/autotest_common.sh@10 -- $ set +x 00:00:47.404 ************************************ 00:00:47.404 START TEST ubsan 00:00:47.404 ************************************ 00:00:47.404 12:46:09 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:47.404 using ubsan 00:00:47.404 00:00:47.404 real 0m0.000s 00:00:47.404 user 0m0.000s 00:00:47.404 sys 0m0.000s 00:00:47.404 12:46:09 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:47.404 12:46:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:47.404 ************************************ 00:00:47.404 END TEST ubsan 00:00:47.404 ************************************ 00:00:47.404 12:46:09 -- common/autotest_common.sh@1142 -- $ return 0 00:00:47.404 12:46:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:47.404 12:46:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:47.404 12:46:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:47.404 12:46:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:47.404 12:46:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:47.404 12:46:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:47.404 12:46:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:47.404 12:46:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:47.404 12:46:09 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:47.664 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:47.664 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:47.923 Using 'verbs' RDMA provider 00:01:03.773 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:16.011 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:16.011 Creating mk/config.mk...done. 00:01:16.011 Creating mk/cc.flags.mk...done. 00:01:16.011 Type 'make' to build. 
00:01:16.011 12:46:37 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:16.011 12:46:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:16.011 12:46:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:16.011 12:46:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.011 ************************************ 00:01:16.011 START TEST make 00:01:16.011 ************************************ 00:01:16.011 12:46:37 make -- common/autotest_common.sh@1123 -- $ make -j144 00:01:16.011 make[1]: Nothing to be done for 'all'. 00:01:17.390 The Meson build system 00:01:17.390 Version: 1.3.1 00:01:17.390 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:17.390 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:17.390 Build type: native build 00:01:17.390 Project name: libvfio-user 00:01:17.390 Project version: 0.0.1 00:01:17.390 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:17.390 C linker for the host machine: cc ld.bfd 2.39-16 00:01:17.390 Host machine cpu family: x86_64 00:01:17.390 Host machine cpu: x86_64 00:01:17.390 Run-time dependency threads found: YES 00:01:17.390 Library dl found: YES 00:01:17.390 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:17.390 Run-time dependency json-c found: YES 0.17 00:01:17.390 Run-time dependency cmocka found: YES 1.1.7 00:01:17.390 Program pytest-3 found: NO 00:01:17.390 Program flake8 found: NO 00:01:17.390 Program misspell-fixer found: NO 00:01:17.390 Program restructuredtext-lint found: NO 00:01:17.390 Program valgrind found: YES (/usr/bin/valgrind) 00:01:17.390 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:17.390 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:17.390 Compiler for C supports arguments -Wwrite-strings: YES 00:01:17.390 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:17.390 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:17.390 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:17.390 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:17.390 Build targets in project: 8 00:01:17.390 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:17.390 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:17.390 00:01:17.390 libvfio-user 0.0.1 00:01:17.390 00:01:17.390 User defined options 00:01:17.390 buildtype : debug 00:01:17.390 default_library: shared 00:01:17.390 libdir : /usr/local/lib 00:01:17.390 00:01:17.390 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:17.390 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:17.649 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:17.649 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:17.649 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:17.649 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:17.649 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:17.649 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:17.649 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:17.649 [8/37] Compiling C object samples/null.p/null.c.o 00:01:17.649 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:17.649 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:17.649 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:17.649 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:17.649 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:17.649 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:17.649 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:17.649 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:17.649 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:17.649 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:17.649 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:17.649 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:17.649 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:17.649 [22/37] Compiling C object samples/client.p/client.c.o 00:01:17.649 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:17.649 [24/37] Compiling C object samples/server.p/server.c.o 00:01:17.649 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:17.649 [26/37] Linking target samples/client 00:01:17.649 [27/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:17.649 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:17.649 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:17.649 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:17.909 [31/37] Linking target test/unit_tests 00:01:17.909 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:17.909 [33/37] Linking target samples/server 00:01:17.909 [34/37] Linking target samples/null 00:01:17.909 [35/37] Linking target samples/gpio-pci-idio-16 00:01:17.909 [36/37] Linking target samples/lspci 00:01:17.909 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:17.909 INFO: autodetecting backend as ninja 00:01:17.909 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:17.909 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:18.481 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:18.481 ninja: no work to do. 00:01:25.069 The Meson build system 00:01:25.069 Version: 1.3.1 00:01:25.069 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:25.069 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:25.069 Build type: native build 00:01:25.069 Program cat found: YES (/usr/bin/cat) 00:01:25.069 Project name: DPDK 00:01:25.069 Project version: 24.03.0 00:01:25.069 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:25.069 C linker for the host machine: cc ld.bfd 2.39-16 00:01:25.069 Host machine cpu family: x86_64 00:01:25.069 Host machine cpu: x86_64 00:01:25.069 Message: ## Building in Developer Mode ## 00:01:25.069 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:25.069 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:25.069 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:25.069 Program python3 found: YES (/usr/bin/python3) 00:01:25.069 Program cat found: YES (/usr/bin/cat) 00:01:25.069 Compiler for C supports arguments -march=native: YES 00:01:25.069 Checking for size of "void *" : 8 00:01:25.069 Checking for size of "void *" : 8 (cached) 00:01:25.069 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:25.069 Library m found: YES 00:01:25.069 Library numa found: YES 00:01:25.069 Has header "numaif.h" : YES 00:01:25.069 Library fdt found: NO 00:01:25.069 Library execinfo found: NO 00:01:25.069 Has header "execinfo.h" : YES 00:01:25.069 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:25.069 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:25.069 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:25.069 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:25.069 Run-time dependency openssl found: YES 3.0.9 00:01:25.069 Run-time dependency libpcap found: YES 1.10.4 00:01:25.069 Has header "pcap.h" with dependency libpcap: YES 00:01:25.069 Compiler for C supports arguments -Wcast-qual: YES 00:01:25.069 Compiler for C supports arguments -Wdeprecated: YES 00:01:25.069 Compiler for C supports arguments -Wformat: YES 00:01:25.069 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:25.069 Compiler for C supports arguments -Wformat-security: NO 00:01:25.069 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:25.069 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:25.069 Compiler for C supports arguments -Wnested-externs: YES 00:01:25.069 Compiler for C supports arguments -Wold-style-definition: YES 00:01:25.069 Compiler for C supports arguments -Wpointer-arith: YES 00:01:25.069 Compiler for C supports arguments -Wsign-compare: YES 00:01:25.069 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:25.069 Compiler for C supports arguments -Wundef: YES 00:01:25.069 Compiler for C supports arguments -Wwrite-strings: YES 00:01:25.069 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:25.069 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:25.069 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:25.069 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:25.069 Program objdump found: YES (/usr/bin/objdump) 00:01:25.069 Compiler for C supports arguments -mavx512f: YES 00:01:25.069 Checking if "AVX512 checking" compiles: YES 00:01:25.069 Fetching value of define "__SSE4_2__" : 1 00:01:25.069 Fetching value of define "__AES__" : 1 00:01:25.069 Fetching value of define "__AVX__" : 1 00:01:25.069 Fetching value of define "__AVX2__" : 1 00:01:25.069 Fetching value of define "__AVX512BW__" : 1 00:01:25.069 Fetching value of define "__AVX512CD__" : 1 00:01:25.069 Fetching value of define "__AVX512DQ__" : 1 00:01:25.069 Fetching value of define "__AVX512F__" : 1 00:01:25.069 Fetching value of define "__AVX512VL__" : 1 00:01:25.069 Fetching value of define "__PCLMUL__" : 1 00:01:25.069 Fetching value of define "__RDRND__" : 1 00:01:25.070 Fetching value of define "__RDSEED__" : 1 00:01:25.070 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:25.070 Fetching value of define "__znver1__" : (undefined) 00:01:25.070 Fetching value of define "__znver2__" : (undefined) 00:01:25.070 Fetching value of define "__znver3__" : (undefined) 00:01:25.070 Fetching value of define "__znver4__" : (undefined) 00:01:25.070 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:25.070 Message: lib/log: Defining dependency "log" 00:01:25.070 Message: lib/kvargs: Defining dependency "kvargs" 00:01:25.070 Message: lib/telemetry: Defining dependency "telemetry" 00:01:25.070 Checking for function "getentropy" : NO 00:01:25.070 Message: lib/eal: Defining dependency "eal" 00:01:25.070 Message: lib/ring: Defining dependency "ring" 00:01:25.070 Message: lib/rcu: Defining dependency "rcu" 00:01:25.070 Message: lib/mempool: Defining dependency "mempool" 00:01:25.070 Message: lib/mbuf: Defining dependency "mbuf" 00:01:25.070 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:25.070 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:25.070 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:25.070 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:25.070 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:25.070 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:25.070 Compiler for C supports arguments -mpclmul: YES 00:01:25.070 Compiler for C supports arguments -maes: YES 00:01:25.070 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:25.070 Compiler for C supports arguments -mavx512bw: YES 00:01:25.070 Compiler for C supports arguments -mavx512dq: YES 00:01:25.070 Compiler for C supports arguments -mavx512vl: YES 00:01:25.070 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:25.070 Compiler for C supports arguments -mavx2: YES 00:01:25.070 Compiler for C supports arguments -mavx: YES 00:01:25.070 Message: lib/net: Defining dependency "net" 00:01:25.070 Message: lib/meter: Defining dependency "meter" 00:01:25.070 Message: lib/ethdev: Defining dependency "ethdev" 00:01:25.070 Message: lib/pci: Defining dependency "pci" 00:01:25.070 Message: lib/cmdline: Defining dependency "cmdline" 00:01:25.070 Message: lib/hash: Defining dependency "hash" 00:01:25.070 Message: lib/timer: Defining dependency "timer" 00:01:25.070 Message: lib/compressdev: Defining dependency "compressdev" 00:01:25.070 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:25.070 Message: lib/dmadev: Defining dependency "dmadev" 00:01:25.070 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:25.070 Message: lib/power: Defining dependency "power" 00:01:25.070 Message: lib/reorder: Defining dependency "reorder" 00:01:25.070 Message: lib/security: Defining dependency "security" 00:01:25.070 Has header "linux/userfaultfd.h" : YES 00:01:25.070 Has header "linux/vduse.h" : YES 00:01:25.070 Message: lib/vhost: Defining dependency "vhost" 00:01:25.070 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:25.070 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:25.070 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:25.070 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:25.070 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:25.070 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:25.070 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:25.070 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:25.070 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:25.070 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:25.070 Program doxygen found: YES (/usr/bin/doxygen) 00:01:25.070 Configuring doxy-api-html.conf using configuration 00:01:25.070 Configuring doxy-api-man.conf using configuration 00:01:25.070 Program mandb found: YES (/usr/bin/mandb) 00:01:25.070 Program sphinx-build found: NO 00:01:25.070 Configuring rte_build_config.h using configuration 00:01:25.070 Message: 00:01:25.070 ================= 00:01:25.070 Applications Enabled 00:01:25.070 ================= 00:01:25.070 00:01:25.070 apps: 00:01:25.070 00:01:25.070 00:01:25.070 Message: 00:01:25.070 ================= 00:01:25.070 Libraries Enabled 00:01:25.070 ================= 00:01:25.070 00:01:25.070 libs: 00:01:25.070 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:25.070 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:25.070 cryptodev, dmadev, power, reorder, security, vhost, 00:01:25.070 00:01:25.070 Message: 00:01:25.070 =============== 00:01:25.070 Drivers Enabled 00:01:25.070 =============== 00:01:25.070 00:01:25.070 common: 00:01:25.070 00:01:25.070 bus: 00:01:25.070 pci, vdev, 00:01:25.070 mempool: 00:01:25.070 ring, 00:01:25.070 dma: 00:01:25.070 00:01:25.070 net: 00:01:25.070 00:01:25.070 crypto: 00:01:25.070 00:01:25.070 compress: 00:01:25.070 00:01:25.070 vdpa: 00:01:25.070 00:01:25.070 00:01:25.070 Message: 00:01:25.070 ================= 00:01:25.070 Content Skipped 00:01:25.070 ================= 00:01:25.070 00:01:25.070 apps: 00:01:25.070 dumpcap: explicitly disabled via build config 00:01:25.070 graph: explicitly disabled via build config 00:01:25.070 pdump: explicitly disabled via build config 00:01:25.070 proc-info: explicitly disabled via build config 00:01:25.070 test-acl: explicitly disabled via build config 00:01:25.070 test-bbdev: explicitly disabled via build config 00:01:25.070 test-cmdline: explicitly disabled via build config 00:01:25.070 test-compress-perf: explicitly disabled via build config 00:01:25.070 test-crypto-perf: explicitly disabled via build config 00:01:25.070 test-dma-perf: explicitly disabled via build config 00:01:25.070 test-eventdev: explicitly disabled via build config 00:01:25.070 test-fib: explicitly disabled via build config 00:01:25.070 test-flow-perf: explicitly disabled via build config 00:01:25.070 test-gpudev: explicitly disabled via build config 00:01:25.070 
test-mldev: explicitly disabled via build config 00:01:25.070 test-pipeline: explicitly disabled via build config 00:01:25.070 test-pmd: explicitly disabled via build config 00:01:25.070 test-regex: explicitly disabled via build config 00:01:25.070 test-sad: explicitly disabled via build config 00:01:25.070 test-security-perf: explicitly disabled via build config 00:01:25.070 00:01:25.070 libs: 00:01:25.070 argparse: explicitly disabled via build config 00:01:25.070 metrics: explicitly disabled via build config 00:01:25.070 acl: explicitly disabled via build config 00:01:25.070 bbdev: explicitly disabled via build config 00:01:25.070 bitratestats: explicitly disabled via build config 00:01:25.070 bpf: explicitly disabled via build config 00:01:25.070 cfgfile: explicitly disabled via build config 00:01:25.070 distributor: explicitly disabled via build config 00:01:25.070 efd: explicitly disabled via build config 00:01:25.070 eventdev: explicitly disabled via build config 00:01:25.070 dispatcher: explicitly disabled via build config 00:01:25.070 gpudev: explicitly disabled via build config 00:01:25.070 gro: explicitly disabled via build config 00:01:25.070 gso: explicitly disabled via build config 00:01:25.070 ip_frag: explicitly disabled via build config 00:01:25.070 jobstats: explicitly disabled via build config 00:01:25.070 latencystats: explicitly disabled via build config 00:01:25.070 lpm: explicitly disabled via build config 00:01:25.070 member: explicitly disabled via build config 00:01:25.070 pcapng: explicitly disabled via build config 00:01:25.070 rawdev: explicitly disabled via build config 00:01:25.070 regexdev: explicitly disabled via build config 00:01:25.070 mldev: explicitly disabled via build config 00:01:25.070 rib: explicitly disabled via build config 00:01:25.070 sched: explicitly disabled via build config 00:01:25.070 stack: explicitly disabled via build config 00:01:25.070 ipsec: explicitly disabled via build config 00:01:25.070 pdcp: explicitly disabled via build config 00:01:25.070 fib: explicitly disabled via build config 00:01:25.070 port: explicitly disabled via build config 00:01:25.070 pdump: explicitly disabled via build config 00:01:25.070 table: explicitly disabled via build config 00:01:25.070 pipeline: explicitly disabled via build config 00:01:25.070 graph: explicitly disabled via build config 00:01:25.070 node: explicitly disabled via build config 00:01:25.070 00:01:25.070 drivers: 00:01:25.070 common/cpt: not in enabled drivers build config 00:01:25.070 common/dpaax: not in enabled drivers build config 00:01:25.070 common/iavf: not in enabled drivers build config 00:01:25.070 common/idpf: not in enabled drivers build config 00:01:25.070 common/ionic: not in enabled drivers build config 00:01:25.070 common/mvep: not in enabled drivers build config 00:01:25.070 common/octeontx: not in enabled drivers build config 00:01:25.070 bus/auxiliary: not in enabled drivers build config 00:01:25.070 bus/cdx: not in enabled drivers build config 00:01:25.070 bus/dpaa: not in enabled drivers build config 00:01:25.070 bus/fslmc: not in enabled drivers build config 00:01:25.070 bus/ifpga: not in enabled drivers build config 00:01:25.070 bus/platform: not in enabled drivers build config 00:01:25.070 bus/uacce: not in enabled drivers build config 00:01:25.070 bus/vmbus: not in enabled drivers build config 00:01:25.070 common/cnxk: not in enabled drivers build config 00:01:25.070 common/mlx5: not in enabled drivers build config 00:01:25.070 common/nfp: not in enabled drivers 
build config 00:01:25.070 common/nitrox: not in enabled drivers build config 00:01:25.070 common/qat: not in enabled drivers build config 00:01:25.070 common/sfc_efx: not in enabled drivers build config 00:01:25.070 mempool/bucket: not in enabled drivers build config 00:01:25.070 mempool/cnxk: not in enabled drivers build config 00:01:25.070 mempool/dpaa: not in enabled drivers build config 00:01:25.070 mempool/dpaa2: not in enabled drivers build config 00:01:25.070 mempool/octeontx: not in enabled drivers build config 00:01:25.070 mempool/stack: not in enabled drivers build config 00:01:25.070 dma/cnxk: not in enabled drivers build config 00:01:25.070 dma/dpaa: not in enabled drivers build config 00:01:25.070 dma/dpaa2: not in enabled drivers build config 00:01:25.070 dma/hisilicon: not in enabled drivers build config 00:01:25.070 dma/idxd: not in enabled drivers build config 00:01:25.070 dma/ioat: not in enabled drivers build config 00:01:25.070 dma/skeleton: not in enabled drivers build config 00:01:25.070 net/af_packet: not in enabled drivers build config 00:01:25.070 net/af_xdp: not in enabled drivers build config 00:01:25.070 net/ark: not in enabled drivers build config 00:01:25.070 net/atlantic: not in enabled drivers build config 00:01:25.070 net/avp: not in enabled drivers build config 00:01:25.070 net/axgbe: not in enabled drivers build config 00:01:25.070 net/bnx2x: not in enabled drivers build config 00:01:25.070 net/bnxt: not in enabled drivers build config 00:01:25.070 net/bonding: not in enabled drivers build config 00:01:25.070 net/cnxk: not in enabled drivers build config 00:01:25.071 net/cpfl: not in enabled drivers build config 00:01:25.071 net/cxgbe: not in enabled drivers build config 00:01:25.071 net/dpaa: not in enabled drivers build config 00:01:25.071 net/dpaa2: not in enabled drivers build config 00:01:25.071 net/e1000: not in enabled drivers build config 00:01:25.071 net/ena: not in enabled drivers build config 00:01:25.071 net/enetc: not in enabled drivers build config 00:01:25.071 net/enetfec: not in enabled drivers build config 00:01:25.071 net/enic: not in enabled drivers build config 00:01:25.071 net/failsafe: not in enabled drivers build config 00:01:25.071 net/fm10k: not in enabled drivers build config 00:01:25.071 net/gve: not in enabled drivers build config 00:01:25.071 net/hinic: not in enabled drivers build config 00:01:25.071 net/hns3: not in enabled drivers build config 00:01:25.071 net/i40e: not in enabled drivers build config 00:01:25.071 net/iavf: not in enabled drivers build config 00:01:25.071 net/ice: not in enabled drivers build config 00:01:25.071 net/idpf: not in enabled drivers build config 00:01:25.071 net/igc: not in enabled drivers build config 00:01:25.071 net/ionic: not in enabled drivers build config 00:01:25.071 net/ipn3ke: not in enabled drivers build config 00:01:25.071 net/ixgbe: not in enabled drivers build config 00:01:25.071 net/mana: not in enabled drivers build config 00:01:25.071 net/memif: not in enabled drivers build config 00:01:25.071 net/mlx4: not in enabled drivers build config 00:01:25.071 net/mlx5: not in enabled drivers build config 00:01:25.071 net/mvneta: not in enabled drivers build config 00:01:25.071 net/mvpp2: not in enabled drivers build config 00:01:25.071 net/netvsc: not in enabled drivers build config 00:01:25.071 net/nfb: not in enabled drivers build config 00:01:25.071 net/nfp: not in enabled drivers build config 00:01:25.071 net/ngbe: not in enabled drivers build config 00:01:25.071 net/null: not in 
enabled drivers build config 00:01:25.071 net/octeontx: not in enabled drivers build config 00:01:25.071 net/octeon_ep: not in enabled drivers build config 00:01:25.071 net/pcap: not in enabled drivers build config 00:01:25.071 net/pfe: not in enabled drivers build config 00:01:25.071 net/qede: not in enabled drivers build config 00:01:25.071 net/ring: not in enabled drivers build config 00:01:25.071 net/sfc: not in enabled drivers build config 00:01:25.071 net/softnic: not in enabled drivers build config 00:01:25.071 net/tap: not in enabled drivers build config 00:01:25.071 net/thunderx: not in enabled drivers build config 00:01:25.071 net/txgbe: not in enabled drivers build config 00:01:25.071 net/vdev_netvsc: not in enabled drivers build config 00:01:25.071 net/vhost: not in enabled drivers build config 00:01:25.071 net/virtio: not in enabled drivers build config 00:01:25.071 net/vmxnet3: not in enabled drivers build config 00:01:25.071 raw/*: missing internal dependency, "rawdev" 00:01:25.071 crypto/armv8: not in enabled drivers build config 00:01:25.071 crypto/bcmfs: not in enabled drivers build config 00:01:25.071 crypto/caam_jr: not in enabled drivers build config 00:01:25.071 crypto/ccp: not in enabled drivers build config 00:01:25.071 crypto/cnxk: not in enabled drivers build config 00:01:25.071 crypto/dpaa_sec: not in enabled drivers build config 00:01:25.071 crypto/dpaa2_sec: not in enabled drivers build config 00:01:25.071 crypto/ipsec_mb: not in enabled drivers build config 00:01:25.071 crypto/mlx5: not in enabled drivers build config 00:01:25.071 crypto/mvsam: not in enabled drivers build config 00:01:25.071 crypto/nitrox: not in enabled drivers build config 00:01:25.071 crypto/null: not in enabled drivers build config 00:01:25.071 crypto/octeontx: not in enabled drivers build config 00:01:25.071 crypto/openssl: not in enabled drivers build config 00:01:25.071 crypto/scheduler: not in enabled drivers build config 00:01:25.071 crypto/uadk: not in enabled drivers build config 00:01:25.071 crypto/virtio: not in enabled drivers build config 00:01:25.071 compress/isal: not in enabled drivers build config 00:01:25.071 compress/mlx5: not in enabled drivers build config 00:01:25.071 compress/nitrox: not in enabled drivers build config 00:01:25.071 compress/octeontx: not in enabled drivers build config 00:01:25.071 compress/zlib: not in enabled drivers build config 00:01:25.071 regex/*: missing internal dependency, "regexdev" 00:01:25.071 ml/*: missing internal dependency, "mldev" 00:01:25.071 vdpa/ifc: not in enabled drivers build config 00:01:25.071 vdpa/mlx5: not in enabled drivers build config 00:01:25.071 vdpa/nfp: not in enabled drivers build config 00:01:25.071 vdpa/sfc: not in enabled drivers build config 00:01:25.071 event/*: missing internal dependency, "eventdev" 00:01:25.071 baseband/*: missing internal dependency, "bbdev" 00:01:25.071 gpu/*: missing internal dependency, "gpudev" 00:01:25.071 00:01:25.071 00:01:25.071 Build targets in project: 84 00:01:25.071 00:01:25.071 DPDK 24.03.0 00:01:25.071 00:01:25.071 User defined options 00:01:25.071 buildtype : debug 00:01:25.071 default_library : shared 00:01:25.071 libdir : lib 00:01:25.071 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:25.071 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:25.071 c_link_args : 00:01:25.071 cpu_instruction_set: native 00:01:25.071 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:25.071 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:25.071 enable_docs : false 00:01:25.071 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:25.071 enable_kmods : false 00:01:25.071 max_lcores : 128 00:01:25.071 tests : false 00:01:25.071 00:01:25.071 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:25.071 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:25.071 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:25.071 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:25.071 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:25.071 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:25.071 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:25.071 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:25.071 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:25.071 [8/267] Linking static target lib/librte_kvargs.a 00:01:25.071 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:25.071 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:25.071 [11/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:25.071 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:25.071 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:25.071 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:25.071 [15/267] Linking static target lib/librte_log.a 00:01:25.071 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:25.071 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:25.071 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:25.071 [19/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:25.071 [20/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:25.071 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:25.071 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:25.071 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:25.071 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:25.071 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:25.071 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:25.071 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:25.071 [28/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:25.071 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:25.071 [30/267] Linking static target lib/librte_pci.a 00:01:25.071 [31/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 
00:01:25.071 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:25.071 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:25.071 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:25.071 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:25.071 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:25.331 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:25.331 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:25.331 [39/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:25.331 [40/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:25.331 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.331 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:25.331 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:25.331 [44/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:25.331 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:25.331 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:25.331 [47/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:25.331 [48/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.331 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:25.331 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:25.331 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:25.331 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:25.331 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:25.331 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:25.331 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:25.331 [56/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:25.331 [57/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:25.331 [58/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:25.331 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:25.331 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:25.331 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:25.331 [62/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:25.331 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:25.331 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:25.331 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:25.591 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:25.591 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:25.591 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:25.591 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:25.591 [70/267] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:25.591 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:25.591 [72/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:25.591 [73/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:25.591 [74/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:25.591 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:25.591 [76/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:25.591 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:25.591 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:25.591 [79/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:25.591 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:25.591 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:25.591 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:25.591 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:25.591 [84/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:25.591 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:25.591 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:25.591 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:25.591 [88/267] Linking static target lib/librte_telemetry.a 00:01:25.591 [89/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:25.591 [90/267] Linking static target lib/librte_meter.a 00:01:25.591 [91/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:25.591 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:25.591 [93/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:25.591 [94/267] Linking static target lib/librte_ring.a 00:01:25.591 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:25.591 [96/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:25.591 [97/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:25.591 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:25.591 [99/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:25.591 [100/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:25.591 [101/267] Linking static target lib/librte_timer.a 00:01:25.591 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:25.591 [103/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:25.591 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:25.591 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:25.591 [106/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:25.591 [107/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:25.591 [108/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:25.591 [109/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:25.591 [110/267] Linking static target lib/librte_cmdline.a 00:01:25.591 [111/267] Compiling 
C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:25.591 [112/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:25.591 [113/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:25.591 [114/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:25.591 [115/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:25.591 [116/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:25.591 [117/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:25.591 [118/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:25.591 [119/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:25.591 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:25.591 [121/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:25.591 [122/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:25.591 [123/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.591 [124/267] Linking static target lib/librte_rcu.a 00:01:25.591 [125/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:25.591 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:25.591 [127/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:25.591 [128/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:25.591 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:25.591 [130/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:25.591 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:25.591 [132/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:25.591 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:25.591 [134/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:25.591 [135/267] Linking target lib/librte_log.so.24.1 00:01:25.591 [136/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:25.591 [137/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:25.591 [138/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:25.591 [139/267] Linking static target lib/librte_compressdev.a 00:01:25.591 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:25.591 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:25.591 [142/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:25.591 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:25.591 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:25.591 [145/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:25.591 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:25.591 [147/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:25.591 [148/267] Linking static target lib/librte_reorder.a 00:01:25.591 [149/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:25.592 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:25.592 [151/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 
00:01:25.592 [152/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:25.592 [153/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:25.592 [154/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:25.592 [155/267] Linking static target lib/librte_dmadev.a 00:01:25.592 [156/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:25.592 [157/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:25.592 [158/267] Linking static target lib/librte_mempool.a 00:01:25.592 [159/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:25.592 [160/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:25.592 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:25.592 [162/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:25.592 [163/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:25.592 [164/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:25.592 [165/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:25.592 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:25.592 [167/267] Linking static target lib/librte_net.a 00:01:25.592 [168/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:25.592 [169/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:25.592 [170/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:25.592 [171/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:25.592 [172/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:25.592 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:25.853 [174/267] Linking static target lib/librte_eal.a 00:01:25.853 [175/267] Linking static target lib/librte_power.a 00:01:25.853 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:25.853 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:25.853 [178/267] Linking target lib/librte_kvargs.so.24.1 00:01:25.853 [179/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:25.853 [180/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.853 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:25.853 [182/267] Linking static target lib/librte_security.a 00:01:25.853 [183/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:25.853 [184/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:25.853 [185/267] Linking static target lib/librte_mbuf.a 00:01:25.853 [186/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.853 [187/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:25.853 [188/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:25.853 [189/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:25.853 [190/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:25.853 [191/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:25.853 [192/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:25.853 [193/267] Linking static target 
drivers/librte_mempool_ring.a 00:01:25.853 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:25.853 [195/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.853 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:25.853 [197/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:25.853 [198/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:25.853 [199/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:25.853 [200/267] Linking static target drivers/librte_bus_pci.a 00:01:25.853 [201/267] Linking static target drivers/librte_bus_vdev.a 00:01:25.853 [202/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.853 [203/267] Linking static target lib/librte_hash.a 00:01:25.853 [204/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.114 [205/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.114 [206/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.114 [207/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.114 [208/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:26.114 [209/267] Linking target lib/librte_telemetry.so.24.1 00:01:26.114 [210/267] Linking static target lib/librte_cryptodev.a 00:01:26.114 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.114 [212/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:26.114 [213/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:26.114 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:26.375 [215/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.375 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.375 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.375 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:26.375 [219/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.375 [220/267] Linking static target lib/librte_ethdev.a 00:01:26.636 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.636 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.636 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.636 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.897 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.897 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.469 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:27.469 [228/267] Linking static target lib/librte_vhost.a 00:01:28.413 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped 
by meson to capture output) 00:01:29.796 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.381 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.325 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.325 [233/267] Linking target lib/librte_eal.so.24.1 00:01:37.586 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:37.586 [235/267] Linking target lib/librte_meter.so.24.1 00:01:37.586 [236/267] Linking target lib/librte_pci.so.24.1 00:01:37.586 [237/267] Linking target lib/librte_ring.so.24.1 00:01:37.586 [238/267] Linking target lib/librte_timer.so.24.1 00:01:37.586 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:37.586 [240/267] Linking target lib/librte_dmadev.so.24.1 00:01:37.586 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:37.586 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:37.847 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:37.847 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:37.847 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:37.847 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:37.847 [247/267] Linking target lib/librte_rcu.so.24.1 00:01:37.847 [248/267] Linking target lib/librte_mempool.so.24.1 00:01:37.847 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:37.847 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:38.108 [251/267] Linking target lib/librte_mbuf.so.24.1 00:01:38.108 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:38.108 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:38.108 [254/267] Linking target lib/librte_net.so.24.1 00:01:38.108 [255/267] Linking target lib/librte_compressdev.so.24.1 00:01:38.108 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:38.108 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:38.368 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:38.368 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:38.368 [260/267] Linking target lib/librte_hash.so.24.1 00:01:38.368 [261/267] Linking target lib/librte_cmdline.so.24.1 00:01:38.368 [262/267] Linking target lib/librte_security.so.24.1 00:01:38.368 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:38.629 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:38.629 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:38.629 [266/267] Linking target lib/librte_power.so.24.1 00:01:38.629 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:38.629 INFO: autodetecting backend as ninja 00:01:38.629 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:40.013 CC lib/log/log.o 00:01:40.013 CC lib/ut_mock/mock.o 00:01:40.013 CC lib/log/log_flags.o 00:01:40.013 CC lib/log/log_deprecated.o 00:01:40.013 CC lib/ut/ut.o 00:01:40.013 LIB libspdk_ut_mock.a 00:01:40.013 LIB libspdk_log.a 00:01:40.013 LIB 
libspdk_ut.a 00:01:40.013 SO libspdk_ut_mock.so.6.0 00:01:40.013 SO libspdk_log.so.7.0 00:01:40.013 SO libspdk_ut.so.2.0 00:01:40.013 SYMLINK libspdk_ut_mock.so 00:01:40.013 SYMLINK libspdk_log.so 00:01:40.013 SYMLINK libspdk_ut.so 00:01:40.273 CC lib/ioat/ioat.o 00:01:40.273 CC lib/util/base64.o 00:01:40.273 CC lib/util/bit_array.o 00:01:40.273 CC lib/util/cpuset.o 00:01:40.273 CC lib/util/crc16.o 00:01:40.273 CC lib/util/crc32.o 00:01:40.273 CC lib/util/crc32c.o 00:01:40.273 CC lib/util/crc32_ieee.o 00:01:40.273 CC lib/util/crc64.o 00:01:40.273 CC lib/util/dif.o 00:01:40.273 CC lib/util/fd.o 00:01:40.273 CC lib/util/hexlify.o 00:01:40.273 CC lib/util/file.o 00:01:40.273 CC lib/dma/dma.o 00:01:40.273 CC lib/util/iov.o 00:01:40.273 CC lib/util/math.o 00:01:40.273 CXX lib/trace_parser/trace.o 00:01:40.273 CC lib/util/pipe.o 00:01:40.273 CC lib/util/strerror_tls.o 00:01:40.273 CC lib/util/string.o 00:01:40.273 CC lib/util/uuid.o 00:01:40.273 CC lib/util/fd_group.o 00:01:40.273 CC lib/util/xor.o 00:01:40.273 CC lib/util/zipf.o 00:01:40.534 CC lib/vfio_user/host/vfio_user.o 00:01:40.534 CC lib/vfio_user/host/vfio_user_pci.o 00:01:40.534 LIB libspdk_dma.a 00:01:40.534 SO libspdk_dma.so.4.0 00:01:40.534 LIB libspdk_ioat.a 00:01:40.823 SO libspdk_ioat.so.7.0 00:01:40.823 SYMLINK libspdk_dma.so 00:01:40.823 SYMLINK libspdk_ioat.so 00:01:40.823 LIB libspdk_vfio_user.a 00:01:40.823 SO libspdk_vfio_user.so.5.0 00:01:40.823 LIB libspdk_util.a 00:01:40.823 SYMLINK libspdk_vfio_user.so 00:01:40.823 SO libspdk_util.so.9.1 00:01:41.115 SYMLINK libspdk_util.so 00:01:41.115 LIB libspdk_trace_parser.a 00:01:41.115 SO libspdk_trace_parser.so.5.0 00:01:41.375 SYMLINK libspdk_trace_parser.so 00:01:41.375 CC lib/conf/conf.o 00:01:41.375 CC lib/env_dpdk/env.o 00:01:41.375 CC lib/rdma_provider/common.o 00:01:41.375 CC lib/vmd/vmd.o 00:01:41.375 CC lib/env_dpdk/memory.o 00:01:41.375 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:41.375 CC lib/env_dpdk/pci.o 00:01:41.375 CC lib/vmd/led.o 00:01:41.375 CC lib/env_dpdk/init.o 00:01:41.375 CC lib/rdma_utils/rdma_utils.o 00:01:41.375 CC lib/env_dpdk/threads.o 00:01:41.375 CC lib/json/json_parse.o 00:01:41.375 CC lib/env_dpdk/pci_ioat.o 00:01:41.375 CC lib/json/json_util.o 00:01:41.375 CC lib/env_dpdk/pci_virtio.o 00:01:41.375 CC lib/json/json_write.o 00:01:41.375 CC lib/env_dpdk/pci_vmd.o 00:01:41.375 CC lib/env_dpdk/pci_idxd.o 00:01:41.375 CC lib/idxd/idxd_user.o 00:01:41.375 CC lib/env_dpdk/pci_event.o 00:01:41.375 CC lib/idxd/idxd.o 00:01:41.375 CC lib/env_dpdk/sigbus_handler.o 00:01:41.375 CC lib/idxd/idxd_kernel.o 00:01:41.375 CC lib/env_dpdk/pci_dpdk.o 00:01:41.375 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:41.375 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:41.634 LIB libspdk_rdma_provider.a 00:01:41.634 SO libspdk_rdma_provider.so.6.0 00:01:41.634 LIB libspdk_conf.a 00:01:41.634 LIB libspdk_json.a 00:01:41.634 SO libspdk_conf.so.6.0 00:01:41.634 LIB libspdk_rdma_utils.a 00:01:41.634 SYMLINK libspdk_rdma_provider.so 00:01:41.634 SO libspdk_rdma_utils.so.1.0 00:01:41.634 SO libspdk_json.so.6.0 00:01:41.634 SYMLINK libspdk_conf.so 00:01:41.893 SYMLINK libspdk_rdma_utils.so 00:01:41.893 SYMLINK libspdk_json.so 00:01:41.893 LIB libspdk_idxd.a 00:01:41.893 SO libspdk_idxd.so.12.0 00:01:41.893 LIB libspdk_vmd.a 00:01:41.893 SYMLINK libspdk_idxd.so 00:01:42.154 SO libspdk_vmd.so.6.0 00:01:42.154 SYMLINK libspdk_vmd.so 00:01:42.154 CC lib/jsonrpc/jsonrpc_server.o 00:01:42.154 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:42.154 CC lib/jsonrpc/jsonrpc_client.o 00:01:42.154 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:01:42.413 LIB libspdk_jsonrpc.a 00:01:42.413 SO libspdk_jsonrpc.so.6.0 00:01:42.413 SYMLINK libspdk_jsonrpc.so 00:01:42.674 LIB libspdk_env_dpdk.a 00:01:42.674 SO libspdk_env_dpdk.so.14.1 00:01:42.936 SYMLINK libspdk_env_dpdk.so 00:01:42.936 CC lib/rpc/rpc.o 00:01:42.936 LIB libspdk_rpc.a 00:01:43.198 SO libspdk_rpc.so.6.0 00:01:43.198 SYMLINK libspdk_rpc.so 00:01:43.459 CC lib/notify/notify.o 00:01:43.459 CC lib/trace/trace.o 00:01:43.459 CC lib/notify/notify_rpc.o 00:01:43.459 CC lib/trace/trace_flags.o 00:01:43.459 CC lib/trace/trace_rpc.o 00:01:43.459 CC lib/keyring/keyring.o 00:01:43.459 CC lib/keyring/keyring_rpc.o 00:01:43.720 LIB libspdk_notify.a 00:01:43.720 SO libspdk_notify.so.6.0 00:01:43.720 LIB libspdk_keyring.a 00:01:43.720 LIB libspdk_trace.a 00:01:43.720 SYMLINK libspdk_notify.so 00:01:43.720 SO libspdk_keyring.so.1.0 00:01:43.720 SO libspdk_trace.so.10.0 00:01:43.982 SYMLINK libspdk_keyring.so 00:01:43.982 SYMLINK libspdk_trace.so 00:01:44.243 CC lib/thread/thread.o 00:01:44.243 CC lib/thread/iobuf.o 00:01:44.243 CC lib/sock/sock.o 00:01:44.243 CC lib/sock/sock_rpc.o 00:01:44.505 LIB libspdk_sock.a 00:01:44.766 SO libspdk_sock.so.10.0 00:01:44.766 SYMLINK libspdk_sock.so 00:01:45.027 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:45.027 CC lib/nvme/nvme_ctrlr.o 00:01:45.027 CC lib/nvme/nvme_fabric.o 00:01:45.027 CC lib/nvme/nvme_ns_cmd.o 00:01:45.027 CC lib/nvme/nvme_ns.o 00:01:45.027 CC lib/nvme/nvme_pcie_common.o 00:01:45.027 CC lib/nvme/nvme_pcie.o 00:01:45.027 CC lib/nvme/nvme_qpair.o 00:01:45.027 CC lib/nvme/nvme.o 00:01:45.027 CC lib/nvme/nvme_quirks.o 00:01:45.027 CC lib/nvme/nvme_transport.o 00:01:45.027 CC lib/nvme/nvme_discovery.o 00:01:45.027 CC lib/nvme/nvme_tcp.o 00:01:45.027 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:45.027 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:45.027 CC lib/nvme/nvme_opal.o 00:01:45.027 CC lib/nvme/nvme_io_msg.o 00:01:45.027 CC lib/nvme/nvme_poll_group.o 00:01:45.027 CC lib/nvme/nvme_zns.o 00:01:45.027 CC lib/nvme/nvme_stubs.o 00:01:45.027 CC lib/nvme/nvme_auth.o 00:01:45.027 CC lib/nvme/nvme_cuse.o 00:01:45.027 CC lib/nvme/nvme_vfio_user.o 00:01:45.027 CC lib/nvme/nvme_rdma.o 00:01:45.596 LIB libspdk_thread.a 00:01:45.596 SO libspdk_thread.so.10.1 00:01:45.596 SYMLINK libspdk_thread.so 00:01:45.857 CC lib/init/json_config.o 00:01:45.857 CC lib/init/subsystem.o 00:01:45.857 CC lib/init/subsystem_rpc.o 00:01:45.857 CC lib/init/rpc.o 00:01:45.857 CC lib/blob/blobstore.o 00:01:45.857 CC lib/blob/request.o 00:01:45.857 CC lib/blob/zeroes.o 00:01:45.857 CC lib/blob/blob_bs_dev.o 00:01:45.857 CC lib/accel/accel_rpc.o 00:01:45.857 CC lib/accel/accel.o 00:01:45.857 CC lib/vfu_tgt/tgt_endpoint.o 00:01:45.857 CC lib/vfu_tgt/tgt_rpc.o 00:01:45.857 CC lib/accel/accel_sw.o 00:01:45.857 CC lib/virtio/virtio.o 00:01:45.857 CC lib/virtio/virtio_pci.o 00:01:45.857 CC lib/virtio/virtio_vhost_user.o 00:01:45.857 CC lib/virtio/virtio_vfio_user.o 00:01:46.118 LIB libspdk_init.a 00:01:46.118 SO libspdk_init.so.5.0 00:01:46.118 LIB libspdk_vfu_tgt.a 00:01:46.118 LIB libspdk_virtio.a 00:01:46.380 SO libspdk_vfu_tgt.so.3.0 00:01:46.380 SYMLINK libspdk_init.so 00:01:46.380 SO libspdk_virtio.so.7.0 00:01:46.380 SYMLINK libspdk_vfu_tgt.so 00:01:46.380 SYMLINK libspdk_virtio.so 00:01:46.640 CC lib/event/app.o 00:01:46.640 CC lib/event/reactor.o 00:01:46.640 CC lib/event/log_rpc.o 00:01:46.640 CC lib/event/app_rpc.o 00:01:46.640 CC lib/event/scheduler_static.o 00:01:46.640 LIB libspdk_accel.a 00:01:46.901 SO libspdk_accel.so.15.1 00:01:46.901 
SYMLINK libspdk_accel.so 00:01:46.901 LIB libspdk_nvme.a 00:01:46.901 LIB libspdk_event.a 00:01:47.162 SO libspdk_nvme.so.13.1 00:01:47.162 SO libspdk_event.so.14.0 00:01:47.162 SYMLINK libspdk_event.so 00:01:47.162 CC lib/bdev/bdev.o 00:01:47.162 CC lib/bdev/bdev_rpc.o 00:01:47.162 CC lib/bdev/bdev_zone.o 00:01:47.162 CC lib/bdev/part.o 00:01:47.162 CC lib/bdev/scsi_nvme.o 00:01:47.423 SYMLINK libspdk_nvme.so 00:01:48.366 LIB libspdk_blob.a 00:01:48.366 SO libspdk_blob.so.11.0 00:01:48.627 SYMLINK libspdk_blob.so 00:01:48.888 CC lib/lvol/lvol.o 00:01:48.888 CC lib/blobfs/blobfs.o 00:01:48.888 CC lib/blobfs/tree.o 00:01:49.461 LIB libspdk_bdev.a 00:01:49.461 SO libspdk_bdev.so.15.1 00:01:49.721 SYMLINK libspdk_bdev.so 00:01:49.721 LIB libspdk_blobfs.a 00:01:49.721 SO libspdk_blobfs.so.10.0 00:01:49.721 LIB libspdk_lvol.a 00:01:49.721 SYMLINK libspdk_blobfs.so 00:01:49.721 SO libspdk_lvol.so.10.0 00:01:49.980 SYMLINK libspdk_lvol.so 00:01:49.980 CC lib/nvmf/ctrlr.o 00:01:49.980 CC lib/nvmf/ctrlr_discovery.o 00:01:49.980 CC lib/nvmf/ctrlr_bdev.o 00:01:49.980 CC lib/nvmf/subsystem.o 00:01:49.980 CC lib/nvmf/nvmf.o 00:01:49.980 CC lib/nvmf/nvmf_rpc.o 00:01:49.980 CC lib/nvmf/transport.o 00:01:49.980 CC lib/nvmf/tcp.o 00:01:49.980 CC lib/ftl/ftl_core.o 00:01:49.980 CC lib/ftl/ftl_layout.o 00:01:49.980 CC lib/ftl/ftl_init.o 00:01:49.980 CC lib/nvmf/stubs.o 00:01:49.980 CC lib/nvmf/vfio_user.o 00:01:49.980 CC lib/ftl/ftl_io.o 00:01:49.980 CC lib/nvmf/mdns_server.o 00:01:49.980 CC lib/ftl/ftl_debug.o 00:01:49.980 CC lib/ftl/ftl_sb.o 00:01:49.980 CC lib/scsi/dev.o 00:01:49.980 CC lib/nvmf/rdma.o 00:01:49.980 CC lib/ftl/ftl_l2p.o 00:01:49.980 CC lib/scsi/lun.o 00:01:49.980 CC lib/ftl/ftl_l2p_flat.o 00:01:49.980 CC lib/nvmf/auth.o 00:01:49.980 CC lib/ftl/ftl_nv_cache.o 00:01:49.980 CC lib/scsi/port.o 00:01:49.980 CC lib/ftl/ftl_band.o 00:01:49.980 CC lib/ublk/ublk.o 00:01:49.980 CC lib/scsi/scsi.o 00:01:49.980 CC lib/ftl/ftl_band_ops.o 00:01:49.980 CC lib/scsi/scsi_bdev.o 00:01:49.980 CC lib/nbd/nbd.o 00:01:49.980 CC lib/ublk/ublk_rpc.o 00:01:49.980 CC lib/ftl/ftl_writer.o 00:01:49.980 CC lib/scsi/scsi_pr.o 00:01:49.980 CC lib/nbd/nbd_rpc.o 00:01:49.980 CC lib/ftl/ftl_rq.o 00:01:49.980 CC lib/scsi/scsi_rpc.o 00:01:49.980 CC lib/scsi/task.o 00:01:49.980 CC lib/ftl/ftl_reloc.o 00:01:49.980 CC lib/ftl/ftl_l2p_cache.o 00:01:49.980 CC lib/ftl/ftl_p2l.o 00:01:49.980 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:49.980 CC lib/ftl/mngt/ftl_mngt.o 00:01:49.980 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:49.980 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:49.980 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:49.980 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:49.980 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:49.980 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:49.980 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:49.980 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:49.980 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:49.981 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:49.981 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:49.981 CC lib/ftl/utils/ftl_conf.o 00:01:49.981 CC lib/ftl/utils/ftl_md.o 00:01:49.981 CC lib/ftl/utils/ftl_mempool.o 00:01:49.981 CC lib/ftl/utils/ftl_bitmap.o 00:01:49.981 CC lib/ftl/utils/ftl_property.o 00:01:49.981 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:49.981 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:49.981 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:49.981 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:49.981 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:49.981 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:49.981 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:49.981 
CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:49.981 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:49.981 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:49.981 CC lib/ftl/base/ftl_base_dev.o 00:01:49.981 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:49.981 CC lib/ftl/base/ftl_base_bdev.o 00:01:49.981 CC lib/ftl/ftl_trace.o 00:01:50.549 LIB libspdk_nbd.a 00:01:50.549 LIB libspdk_scsi.a 00:01:50.549 SO libspdk_nbd.so.7.0 00:01:50.549 SO libspdk_scsi.so.9.0 00:01:50.549 SYMLINK libspdk_nbd.so 00:01:50.809 LIB libspdk_ublk.a 00:01:50.809 SYMLINK libspdk_scsi.so 00:01:50.809 SO libspdk_ublk.so.3.0 00:01:50.809 SYMLINK libspdk_ublk.so 00:01:51.069 LIB libspdk_ftl.a 00:01:51.069 CC lib/iscsi/conn.o 00:01:51.069 CC lib/iscsi/init_grp.o 00:01:51.069 CC lib/iscsi/iscsi.o 00:01:51.069 CC lib/iscsi/md5.o 00:01:51.069 CC lib/iscsi/param.o 00:01:51.069 CC lib/iscsi/portal_grp.o 00:01:51.069 CC lib/iscsi/tgt_node.o 00:01:51.069 CC lib/iscsi/iscsi_subsystem.o 00:01:51.069 CC lib/iscsi/iscsi_rpc.o 00:01:51.069 CC lib/vhost/vhost.o 00:01:51.069 CC lib/iscsi/task.o 00:01:51.069 CC lib/vhost/vhost_rpc.o 00:01:51.069 CC lib/vhost/vhost_scsi.o 00:01:51.069 CC lib/vhost/vhost_blk.o 00:01:51.069 CC lib/vhost/rte_vhost_user.o 00:01:51.069 SO libspdk_ftl.so.9.0 00:01:51.639 SYMLINK libspdk_ftl.so 00:01:51.930 LIB libspdk_nvmf.a 00:01:51.930 SO libspdk_nvmf.so.18.1 00:01:51.930 LIB libspdk_vhost.a 00:01:52.190 SO libspdk_vhost.so.8.0 00:01:52.190 SYMLINK libspdk_nvmf.so 00:01:52.190 SYMLINK libspdk_vhost.so 00:01:52.190 LIB libspdk_iscsi.a 00:01:52.190 SO libspdk_iscsi.so.8.0 00:01:52.450 SYMLINK libspdk_iscsi.so 00:01:53.020 CC module/env_dpdk/env_dpdk_rpc.o 00:01:53.020 CC module/vfu_device/vfu_virtio.o 00:01:53.020 CC module/vfu_device/vfu_virtio_blk.o 00:01:53.020 CC module/vfu_device/vfu_virtio_scsi.o 00:01:53.020 CC module/vfu_device/vfu_virtio_rpc.o 00:01:53.280 LIB libspdk_env_dpdk_rpc.a 00:01:53.280 CC module/sock/posix/posix.o 00:01:53.280 CC module/accel/iaa/accel_iaa.o 00:01:53.280 CC module/accel/dsa/accel_dsa.o 00:01:53.280 CC module/accel/iaa/accel_iaa_rpc.o 00:01:53.280 CC module/accel/dsa/accel_dsa_rpc.o 00:01:53.280 CC module/keyring/file/keyring.o 00:01:53.280 CC module/keyring/file/keyring_rpc.o 00:01:53.280 CC module/blob/bdev/blob_bdev.o 00:01:53.280 CC module/accel/error/accel_error.o 00:01:53.280 CC module/accel/error/accel_error_rpc.o 00:01:53.280 CC module/accel/ioat/accel_ioat.o 00:01:53.280 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:53.280 CC module/accel/ioat/accel_ioat_rpc.o 00:01:53.280 CC module/keyring/linux/keyring.o 00:01:53.280 CC module/scheduler/gscheduler/gscheduler.o 00:01:53.280 CC module/keyring/linux/keyring_rpc.o 00:01:53.280 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:53.280 SO libspdk_env_dpdk_rpc.so.6.0 00:01:53.280 SYMLINK libspdk_env_dpdk_rpc.so 00:01:53.280 LIB libspdk_scheduler_dpdk_governor.a 00:01:53.280 LIB libspdk_scheduler_gscheduler.a 00:01:53.280 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:53.280 LIB libspdk_keyring_linux.a 00:01:53.280 LIB libspdk_keyring_file.a 00:01:53.280 LIB libspdk_accel_ioat.a 00:01:53.280 LIB libspdk_scheduler_dynamic.a 00:01:53.280 LIB libspdk_accel_error.a 00:01:53.280 SO libspdk_scheduler_gscheduler.so.4.0 00:01:53.280 SO libspdk_keyring_linux.so.1.0 00:01:53.280 LIB libspdk_accel_iaa.a 00:01:53.280 SO libspdk_keyring_file.so.1.0 00:01:53.540 SO libspdk_scheduler_dynamic.so.4.0 00:01:53.540 SO libspdk_accel_ioat.so.6.0 00:01:53.540 SO libspdk_accel_error.so.2.0 00:01:53.540 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:53.540 LIB 
libspdk_accel_dsa.a 00:01:53.540 SO libspdk_accel_iaa.so.3.0 00:01:53.540 SYMLINK libspdk_scheduler_gscheduler.so 00:01:53.540 SO libspdk_accel_dsa.so.5.0 00:01:53.540 LIB libspdk_blob_bdev.a 00:01:53.540 SYMLINK libspdk_keyring_linux.so 00:01:53.540 SYMLINK libspdk_keyring_file.so 00:01:53.540 SYMLINK libspdk_scheduler_dynamic.so 00:01:53.540 SYMLINK libspdk_accel_error.so 00:01:53.540 SYMLINK libspdk_accel_ioat.so 00:01:53.540 SO libspdk_blob_bdev.so.11.0 00:01:53.540 SYMLINK libspdk_accel_iaa.so 00:01:53.540 SYMLINK libspdk_accel_dsa.so 00:01:53.540 LIB libspdk_vfu_device.a 00:01:53.540 SYMLINK libspdk_blob_bdev.so 00:01:53.540 SO libspdk_vfu_device.so.3.0 00:01:53.801 SYMLINK libspdk_vfu_device.so 00:01:53.801 LIB libspdk_sock_posix.a 00:01:53.801 SO libspdk_sock_posix.so.6.0 00:01:54.061 SYMLINK libspdk_sock_posix.so 00:01:54.061 CC module/bdev/nvme/bdev_nvme.o 00:01:54.061 CC module/bdev/gpt/gpt.o 00:01:54.061 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:54.061 CC module/bdev/lvol/vbdev_lvol.o 00:01:54.061 CC module/bdev/nvme/nvme_rpc.o 00:01:54.062 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:54.062 CC module/bdev/gpt/vbdev_gpt.o 00:01:54.062 CC module/bdev/nvme/bdev_mdns_client.o 00:01:54.062 CC module/bdev/delay/vbdev_delay.o 00:01:54.062 CC module/bdev/nvme/vbdev_opal.o 00:01:54.062 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:54.062 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:54.062 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:54.062 CC module/bdev/malloc/bdev_malloc.o 00:01:54.062 CC module/bdev/raid/bdev_raid.o 00:01:54.062 CC module/bdev/passthru/vbdev_passthru.o 00:01:54.062 CC module/bdev/error/vbdev_error.o 00:01:54.062 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:54.062 CC module/bdev/raid/bdev_raid_rpc.o 00:01:54.062 CC module/bdev/raid/bdev_raid_sb.o 00:01:54.062 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:54.062 CC module/bdev/error/vbdev_error_rpc.o 00:01:54.062 CC module/bdev/aio/bdev_aio.o 00:01:54.062 CC module/bdev/raid/raid0.o 00:01:54.062 CC module/bdev/null/bdev_null.o 00:01:54.062 CC module/bdev/aio/bdev_aio_rpc.o 00:01:54.062 CC module/bdev/raid/raid1.o 00:01:54.062 CC module/bdev/null/bdev_null_rpc.o 00:01:54.062 CC module/bdev/raid/concat.o 00:01:54.062 CC module/bdev/iscsi/bdev_iscsi.o 00:01:54.062 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:54.062 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:54.062 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:54.062 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:54.062 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:54.062 CC module/bdev/ftl/bdev_ftl.o 00:01:54.062 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:54.062 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:54.062 CC module/bdev/split/vbdev_split.o 00:01:54.062 CC module/bdev/split/vbdev_split_rpc.o 00:01:54.062 CC module/blobfs/bdev/blobfs_bdev.o 00:01:54.062 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:54.322 LIB libspdk_blobfs_bdev.a 00:01:54.322 LIB libspdk_bdev_split.a 00:01:54.322 LIB libspdk_bdev_null.a 00:01:54.322 SO libspdk_blobfs_bdev.so.6.0 00:01:54.322 LIB libspdk_bdev_error.a 00:01:54.322 LIB libspdk_bdev_gpt.a 00:01:54.322 SO libspdk_bdev_split.so.6.0 00:01:54.322 LIB libspdk_bdev_ftl.a 00:01:54.583 SO libspdk_bdev_null.so.6.0 00:01:54.583 SYMLINK libspdk_blobfs_bdev.so 00:01:54.583 LIB libspdk_bdev_passthru.a 00:01:54.583 LIB libspdk_bdev_zone_block.a 00:01:54.583 SO libspdk_bdev_gpt.so.6.0 00:01:54.583 SO libspdk_bdev_error.so.6.0 00:01:54.583 SO libspdk_bdev_ftl.so.6.0 00:01:54.583 SYMLINK libspdk_bdev_split.so 00:01:54.583 LIB 
libspdk_bdev_aio.a 00:01:54.583 LIB libspdk_bdev_iscsi.a 00:01:54.583 LIB libspdk_bdev_malloc.a 00:01:54.583 SO libspdk_bdev_passthru.so.6.0 00:01:54.583 LIB libspdk_bdev_delay.a 00:01:54.583 SO libspdk_bdev_zone_block.so.6.0 00:01:54.583 SYMLINK libspdk_bdev_null.so 00:01:54.583 SO libspdk_bdev_aio.so.6.0 00:01:54.583 SO libspdk_bdev_iscsi.so.6.0 00:01:54.583 SYMLINK libspdk_bdev_gpt.so 00:01:54.583 SO libspdk_bdev_malloc.so.6.0 00:01:54.583 SYMLINK libspdk_bdev_error.so 00:01:54.583 SO libspdk_bdev_delay.so.6.0 00:01:54.583 SYMLINK libspdk_bdev_ftl.so 00:01:54.583 SYMLINK libspdk_bdev_passthru.so 00:01:54.583 SYMLINK libspdk_bdev_malloc.so 00:01:54.583 SYMLINK libspdk_bdev_zone_block.so 00:01:54.583 SYMLINK libspdk_bdev_aio.so 00:01:54.583 SYMLINK libspdk_bdev_iscsi.so 00:01:54.583 LIB libspdk_bdev_lvol.a 00:01:54.583 SYMLINK libspdk_bdev_delay.so 00:01:54.583 SO libspdk_bdev_lvol.so.6.0 00:01:54.583 LIB libspdk_bdev_virtio.a 00:01:54.583 SO libspdk_bdev_virtio.so.6.0 00:01:54.844 SYMLINK libspdk_bdev_lvol.so 00:01:54.844 SYMLINK libspdk_bdev_virtio.so 00:01:55.104 LIB libspdk_bdev_raid.a 00:01:55.104 SO libspdk_bdev_raid.so.6.0 00:01:55.104 SYMLINK libspdk_bdev_raid.so 00:01:56.045 LIB libspdk_bdev_nvme.a 00:01:56.046 SO libspdk_bdev_nvme.so.7.0 00:01:56.306 SYMLINK libspdk_bdev_nvme.so 00:01:56.877 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:56.877 CC module/event/subsystems/iobuf/iobuf.o 00:01:56.877 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:56.877 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:56.877 CC module/event/subsystems/sock/sock.o 00:01:56.877 CC module/event/subsystems/vmd/vmd.o 00:01:56.877 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:56.877 CC module/event/subsystems/keyring/keyring.o 00:01:56.877 CC module/event/subsystems/scheduler/scheduler.o 00:01:57.139 LIB libspdk_event_vhost_blk.a 00:01:57.139 LIB libspdk_event_vmd.a 00:01:57.139 LIB libspdk_event_iobuf.a 00:01:57.139 LIB libspdk_event_keyring.a 00:01:57.139 LIB libspdk_event_vfu_tgt.a 00:01:57.139 LIB libspdk_event_scheduler.a 00:01:57.139 LIB libspdk_event_sock.a 00:01:57.139 SO libspdk_event_vhost_blk.so.3.0 00:01:57.139 SO libspdk_event_vmd.so.6.0 00:01:57.139 SO libspdk_event_iobuf.so.3.0 00:01:57.139 SO libspdk_event_vfu_tgt.so.3.0 00:01:57.139 SO libspdk_event_sock.so.5.0 00:01:57.139 SO libspdk_event_keyring.so.1.0 00:01:57.139 SO libspdk_event_scheduler.so.4.0 00:01:57.139 SYMLINK libspdk_event_vhost_blk.so 00:01:57.139 SYMLINK libspdk_event_vmd.so 00:01:57.139 SYMLINK libspdk_event_sock.so 00:01:57.139 SYMLINK libspdk_event_vfu_tgt.so 00:01:57.139 SYMLINK libspdk_event_iobuf.so 00:01:57.139 SYMLINK libspdk_event_scheduler.so 00:01:57.139 SYMLINK libspdk_event_keyring.so 00:01:57.400 CC module/event/subsystems/accel/accel.o 00:01:57.661 LIB libspdk_event_accel.a 00:01:57.661 SO libspdk_event_accel.so.6.0 00:01:57.661 SYMLINK libspdk_event_accel.so 00:01:58.233 CC module/event/subsystems/bdev/bdev.o 00:01:58.233 LIB libspdk_event_bdev.a 00:01:58.234 SO libspdk_event_bdev.so.6.0 00:01:58.495 SYMLINK libspdk_event_bdev.so 00:01:58.755 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:58.755 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:58.755 CC module/event/subsystems/scsi/scsi.o 00:01:58.755 CC module/event/subsystems/ublk/ublk.o 00:01:58.755 CC module/event/subsystems/nbd/nbd.o 00:01:59.014 LIB libspdk_event_nbd.a 00:01:59.014 LIB libspdk_event_ublk.a 00:01:59.014 LIB libspdk_event_scsi.a 00:01:59.014 SO libspdk_event_nbd.so.6.0 00:01:59.014 SO libspdk_event_ublk.so.3.0 
00:01:59.014 SO libspdk_event_scsi.so.6.0 00:01:59.014 LIB libspdk_event_nvmf.a 00:01:59.014 SYMLINK libspdk_event_nbd.so 00:01:59.014 SO libspdk_event_nvmf.so.6.0 00:01:59.014 SYMLINK libspdk_event_ublk.so 00:01:59.014 SYMLINK libspdk_event_scsi.so 00:01:59.014 SYMLINK libspdk_event_nvmf.so 00:01:59.275 CC module/event/subsystems/iscsi/iscsi.o 00:01:59.275 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:59.557 LIB libspdk_event_vhost_scsi.a 00:01:59.557 LIB libspdk_event_iscsi.a 00:01:59.557 SO libspdk_event_vhost_scsi.so.3.0 00:01:59.557 SO libspdk_event_iscsi.so.6.0 00:01:59.557 SYMLINK libspdk_event_vhost_scsi.so 00:01:59.557 SYMLINK libspdk_event_iscsi.so 00:01:59.818 SO libspdk.so.6.0 00:01:59.818 SYMLINK libspdk.so 00:02:00.389 CXX app/trace/trace.o 00:02:00.389 CC app/spdk_lspci/spdk_lspci.o 00:02:00.389 CC app/spdk_nvme_identify/identify.o 00:02:00.389 CC app/spdk_top/spdk_top.o 00:02:00.389 CC app/trace_record/trace_record.o 00:02:00.389 CC app/spdk_nvme_perf/perf.o 00:02:00.389 CC test/rpc_client/rpc_client_test.o 00:02:00.389 CC app/spdk_nvme_discover/discovery_aer.o 00:02:00.389 TEST_HEADER include/spdk/accel.h 00:02:00.389 TEST_HEADER include/spdk/accel_module.h 00:02:00.389 TEST_HEADER include/spdk/barrier.h 00:02:00.389 TEST_HEADER include/spdk/assert.h 00:02:00.389 TEST_HEADER include/spdk/base64.h 00:02:00.389 TEST_HEADER include/spdk/bdev.h 00:02:00.389 TEST_HEADER include/spdk/bdev_zone.h 00:02:00.389 TEST_HEADER include/spdk/bdev_module.h 00:02:00.389 TEST_HEADER include/spdk/bit_array.h 00:02:00.389 TEST_HEADER include/spdk/bit_pool.h 00:02:00.389 TEST_HEADER include/spdk/blob_bdev.h 00:02:00.389 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:00.389 TEST_HEADER include/spdk/blobfs.h 00:02:00.389 TEST_HEADER include/spdk/conf.h 00:02:00.389 TEST_HEADER include/spdk/blob.h 00:02:00.389 TEST_HEADER include/spdk/config.h 00:02:00.389 TEST_HEADER include/spdk/cpuset.h 00:02:00.389 TEST_HEADER include/spdk/crc16.h 00:02:00.389 TEST_HEADER include/spdk/crc32.h 00:02:00.389 CC app/iscsi_tgt/iscsi_tgt.o 00:02:00.389 TEST_HEADER include/spdk/crc64.h 00:02:00.389 TEST_HEADER include/spdk/dif.h 00:02:00.389 TEST_HEADER include/spdk/dma.h 00:02:00.389 TEST_HEADER include/spdk/endian.h 00:02:00.389 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:00.389 TEST_HEADER include/spdk/env_dpdk.h 00:02:00.389 CC app/spdk_dd/spdk_dd.o 00:02:00.389 TEST_HEADER include/spdk/env.h 00:02:00.389 CC app/nvmf_tgt/nvmf_main.o 00:02:00.389 TEST_HEADER include/spdk/event.h 00:02:00.389 TEST_HEADER include/spdk/fd.h 00:02:00.389 TEST_HEADER include/spdk/fd_group.h 00:02:00.389 TEST_HEADER include/spdk/file.h 00:02:00.389 TEST_HEADER include/spdk/ftl.h 00:02:00.389 TEST_HEADER include/spdk/hexlify.h 00:02:00.389 TEST_HEADER include/spdk/gpt_spec.h 00:02:00.389 TEST_HEADER include/spdk/histogram_data.h 00:02:00.389 TEST_HEADER include/spdk/idxd.h 00:02:00.389 TEST_HEADER include/spdk/idxd_spec.h 00:02:00.389 TEST_HEADER include/spdk/init.h 00:02:00.389 TEST_HEADER include/spdk/ioat.h 00:02:00.389 TEST_HEADER include/spdk/ioat_spec.h 00:02:00.389 TEST_HEADER include/spdk/iscsi_spec.h 00:02:00.389 TEST_HEADER include/spdk/jsonrpc.h 00:02:00.389 TEST_HEADER include/spdk/json.h 00:02:00.389 TEST_HEADER include/spdk/keyring.h 00:02:00.389 TEST_HEADER include/spdk/keyring_module.h 00:02:00.389 TEST_HEADER include/spdk/likely.h 00:02:00.389 TEST_HEADER include/spdk/log.h 00:02:00.389 TEST_HEADER include/spdk/lvol.h 00:02:00.389 TEST_HEADER include/spdk/memory.h 00:02:00.389 TEST_HEADER 
include/spdk/mmio.h 00:02:00.389 TEST_HEADER include/spdk/notify.h 00:02:00.389 TEST_HEADER include/spdk/nbd.h 00:02:00.389 TEST_HEADER include/spdk/nvme.h 00:02:00.389 CC app/spdk_tgt/spdk_tgt.o 00:02:00.389 TEST_HEADER include/spdk/nvme_intel.h 00:02:00.389 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:00.389 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:00.389 TEST_HEADER include/spdk/nvme_spec.h 00:02:00.389 TEST_HEADER include/spdk/nvme_zns.h 00:02:00.389 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:00.389 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:00.389 TEST_HEADER include/spdk/nvmf_spec.h 00:02:00.389 TEST_HEADER include/spdk/nvmf.h 00:02:00.389 TEST_HEADER include/spdk/nvmf_transport.h 00:02:00.389 TEST_HEADER include/spdk/opal.h 00:02:00.389 TEST_HEADER include/spdk/opal_spec.h 00:02:00.389 TEST_HEADER include/spdk/pipe.h 00:02:00.389 TEST_HEADER include/spdk/pci_ids.h 00:02:00.389 TEST_HEADER include/spdk/queue.h 00:02:00.389 TEST_HEADER include/spdk/reduce.h 00:02:00.389 TEST_HEADER include/spdk/rpc.h 00:02:00.389 TEST_HEADER include/spdk/scsi.h 00:02:00.389 TEST_HEADER include/spdk/scheduler.h 00:02:00.389 TEST_HEADER include/spdk/sock.h 00:02:00.389 TEST_HEADER include/spdk/scsi_spec.h 00:02:00.389 TEST_HEADER include/spdk/stdinc.h 00:02:00.389 TEST_HEADER include/spdk/string.h 00:02:00.389 TEST_HEADER include/spdk/trace.h 00:02:00.389 TEST_HEADER include/spdk/thread.h 00:02:00.389 TEST_HEADER include/spdk/trace_parser.h 00:02:00.389 TEST_HEADER include/spdk/tree.h 00:02:00.389 TEST_HEADER include/spdk/ublk.h 00:02:00.389 TEST_HEADER include/spdk/util.h 00:02:00.389 TEST_HEADER include/spdk/uuid.h 00:02:00.389 TEST_HEADER include/spdk/version.h 00:02:00.389 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:00.389 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:00.389 TEST_HEADER include/spdk/xor.h 00:02:00.389 TEST_HEADER include/spdk/vhost.h 00:02:00.389 TEST_HEADER include/spdk/vmd.h 00:02:00.389 TEST_HEADER include/spdk/zipf.h 00:02:00.389 CXX test/cpp_headers/accel.o 00:02:00.389 CXX test/cpp_headers/accel_module.o 00:02:00.389 CXX test/cpp_headers/assert.o 00:02:00.389 CXX test/cpp_headers/barrier.o 00:02:00.389 CXX test/cpp_headers/base64.o 00:02:00.389 CXX test/cpp_headers/bdev.o 00:02:00.389 CXX test/cpp_headers/bdev_module.o 00:02:00.389 CXX test/cpp_headers/bit_array.o 00:02:00.389 CXX test/cpp_headers/bdev_zone.o 00:02:00.389 CXX test/cpp_headers/bit_pool.o 00:02:00.389 CXX test/cpp_headers/blobfs_bdev.o 00:02:00.389 CXX test/cpp_headers/blob_bdev.o 00:02:00.389 CXX test/cpp_headers/blobfs.o 00:02:00.389 CXX test/cpp_headers/blob.o 00:02:00.389 CXX test/cpp_headers/config.o 00:02:00.389 CXX test/cpp_headers/conf.o 00:02:00.389 CXX test/cpp_headers/cpuset.o 00:02:00.389 CXX test/cpp_headers/crc16.o 00:02:00.389 CXX test/cpp_headers/crc32.o 00:02:00.389 CXX test/cpp_headers/crc64.o 00:02:00.389 CXX test/cpp_headers/dif.o 00:02:00.389 CXX test/cpp_headers/dma.o 00:02:00.389 CXX test/cpp_headers/endian.o 00:02:00.389 CXX test/cpp_headers/env.o 00:02:00.389 CXX test/cpp_headers/env_dpdk.o 00:02:00.389 CXX test/cpp_headers/event.o 00:02:00.389 CXX test/cpp_headers/fd_group.o 00:02:00.389 CXX test/cpp_headers/ftl.o 00:02:00.389 CXX test/cpp_headers/fd.o 00:02:00.389 CXX test/cpp_headers/file.o 00:02:00.389 CXX test/cpp_headers/gpt_spec.o 00:02:00.389 CXX test/cpp_headers/histogram_data.o 00:02:00.389 CXX test/cpp_headers/idxd_spec.o 00:02:00.389 CXX test/cpp_headers/hexlify.o 00:02:00.389 CXX test/cpp_headers/ioat.o 00:02:00.389 CXX test/cpp_headers/init.o 00:02:00.389 CXX 
test/cpp_headers/idxd.o 00:02:00.389 CXX test/cpp_headers/ioat_spec.o 00:02:00.389 CXX test/cpp_headers/iscsi_spec.o 00:02:00.389 CXX test/cpp_headers/jsonrpc.o 00:02:00.389 CXX test/cpp_headers/json.o 00:02:00.389 CXX test/cpp_headers/keyring.o 00:02:00.389 CXX test/cpp_headers/keyring_module.o 00:02:00.389 CXX test/cpp_headers/lvol.o 00:02:00.389 CXX test/cpp_headers/likely.o 00:02:00.389 CXX test/cpp_headers/log.o 00:02:00.389 CXX test/cpp_headers/nbd.o 00:02:00.390 CC examples/util/zipf/zipf.o 00:02:00.390 CXX test/cpp_headers/memory.o 00:02:00.390 CXX test/cpp_headers/nvme.o 00:02:00.390 CXX test/cpp_headers/mmio.o 00:02:00.390 CXX test/cpp_headers/notify.o 00:02:00.390 CXX test/cpp_headers/nvme_ocssd.o 00:02:00.390 CXX test/cpp_headers/nvme_intel.o 00:02:00.390 LINK spdk_lspci 00:02:00.390 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:00.390 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:00.390 CXX test/cpp_headers/nvme_spec.o 00:02:00.390 CXX test/cpp_headers/nvmf.o 00:02:00.390 CXX test/cpp_headers/nvme_zns.o 00:02:00.390 CXX test/cpp_headers/nvmf_cmd.o 00:02:00.390 CC test/app/histogram_perf/histogram_perf.o 00:02:00.390 CXX test/cpp_headers/nvmf_transport.o 00:02:00.390 CXX test/cpp_headers/opal.o 00:02:00.390 CC test/app/jsoncat/jsoncat.o 00:02:00.390 CXX test/cpp_headers/nvmf_spec.o 00:02:00.390 CC test/app/stub/stub.o 00:02:00.390 CXX test/cpp_headers/opal_spec.o 00:02:00.390 CC app/fio/nvme/fio_plugin.o 00:02:00.390 CXX test/cpp_headers/queue.o 00:02:00.390 CC test/env/vtophys/vtophys.o 00:02:00.390 CXX test/cpp_headers/pci_ids.o 00:02:00.390 CXX test/cpp_headers/pipe.o 00:02:00.390 CC test/env/memory/memory_ut.o 00:02:00.390 CXX test/cpp_headers/rpc.o 00:02:00.390 CXX test/cpp_headers/reduce.o 00:02:00.390 CXX test/cpp_headers/scheduler.o 00:02:00.390 CXX test/cpp_headers/scsi_spec.o 00:02:00.390 CXX test/cpp_headers/scsi.o 00:02:00.390 CC test/thread/poller_perf/poller_perf.o 00:02:00.390 CXX test/cpp_headers/sock.o 00:02:00.390 CXX test/cpp_headers/stdinc.o 00:02:00.390 CC examples/ioat/verify/verify.o 00:02:00.390 CXX test/cpp_headers/trace.o 00:02:00.390 CXX test/cpp_headers/tree.o 00:02:00.390 CXX test/cpp_headers/string.o 00:02:00.390 CXX test/cpp_headers/thread.o 00:02:00.390 CXX test/cpp_headers/trace_parser.o 00:02:00.390 CXX test/cpp_headers/version.o 00:02:00.390 CXX test/cpp_headers/util.o 00:02:00.390 CXX test/cpp_headers/ublk.o 00:02:00.390 CXX test/cpp_headers/uuid.o 00:02:00.390 CXX test/cpp_headers/vfio_user_pci.o 00:02:00.390 CXX test/cpp_headers/vfio_user_spec.o 00:02:00.390 CXX test/cpp_headers/vhost.o 00:02:00.390 CXX test/cpp_headers/vmd.o 00:02:00.390 CXX test/cpp_headers/xor.o 00:02:00.390 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:00.390 CXX test/cpp_headers/zipf.o 00:02:00.390 CC test/env/pci/pci_ut.o 00:02:00.390 CC examples/ioat/perf/perf.o 00:02:00.652 LINK rpc_client_test 00:02:00.652 CC test/app/bdev_svc/bdev_svc.o 00:02:00.652 CC test/dma/test_dma/test_dma.o 00:02:00.652 CC app/fio/bdev/fio_plugin.o 00:02:00.652 LINK spdk_nvme_discover 00:02:00.652 LINK interrupt_tgt 00:02:00.652 LINK nvmf_tgt 00:02:00.652 LINK iscsi_tgt 00:02:00.652 LINK spdk_trace_record 00:02:00.910 LINK spdk_tgt 00:02:00.910 CC test/env/mem_callbacks/mem_callbacks.o 00:02:00.910 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:00.910 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:00.910 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:00.910 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:00.910 LINK env_dpdk_post_init 00:02:00.910 LINK spdk_dd 00:02:00.910 LINK 
jsoncat 00:02:01.170 LINK histogram_perf 00:02:01.170 LINK zipf 00:02:01.170 LINK vtophys 00:02:01.170 LINK spdk_trace 00:02:01.170 LINK stub 00:02:01.170 LINK poller_perf 00:02:01.170 LINK verify 00:02:01.170 LINK bdev_svc 00:02:01.170 LINK test_dma 00:02:01.170 LINK ioat_perf 00:02:01.430 LINK pci_ut 00:02:01.430 LINK spdk_nvme_perf 00:02:01.430 LINK spdk_nvme 00:02:01.430 LINK vhost_fuzz 00:02:01.430 CC app/vhost/vhost.o 00:02:01.430 LINK nvme_fuzz 00:02:01.430 LINK spdk_bdev 00:02:01.430 CC examples/vmd/lsvmd/lsvmd.o 00:02:01.430 CC examples/sock/hello_world/hello_sock.o 00:02:01.430 LINK spdk_nvme_identify 00:02:01.430 CC examples/idxd/perf/perf.o 00:02:01.430 CC examples/vmd/led/led.o 00:02:01.430 LINK spdk_top 00:02:01.690 CC test/event/reactor_perf/reactor_perf.o 00:02:01.690 CC test/event/reactor/reactor.o 00:02:01.690 CC test/event/event_perf/event_perf.o 00:02:01.690 CC examples/thread/thread/thread_ex.o 00:02:01.690 CC test/event/app_repeat/app_repeat.o 00:02:01.690 LINK mem_callbacks 00:02:01.690 CC test/event/scheduler/scheduler.o 00:02:01.690 LINK vhost 00:02:01.690 LINK led 00:02:01.690 LINK lsvmd 00:02:01.690 CC test/nvme/e2edp/nvme_dp.o 00:02:01.690 CC test/nvme/compliance/nvme_compliance.o 00:02:01.690 CC test/nvme/reset/reset.o 00:02:01.690 CC test/nvme/cuse/cuse.o 00:02:01.690 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:01.690 LINK event_perf 00:02:01.690 LINK reactor_perf 00:02:01.690 CC test/nvme/boot_partition/boot_partition.o 00:02:01.690 CC test/nvme/reserve/reserve.o 00:02:01.690 LINK reactor 00:02:01.690 CC test/nvme/simple_copy/simple_copy.o 00:02:01.690 CC test/nvme/aer/aer.o 00:02:01.690 CC test/nvme/overhead/overhead.o 00:02:01.690 CC test/nvme/startup/startup.o 00:02:01.690 CC test/nvme/sgl/sgl.o 00:02:01.690 CC test/nvme/fdp/fdp.o 00:02:01.690 CC test/nvme/fused_ordering/fused_ordering.o 00:02:01.690 CC test/nvme/connect_stress/connect_stress.o 00:02:01.690 CC test/nvme/err_injection/err_injection.o 00:02:01.690 CC test/blobfs/mkfs/mkfs.o 00:02:01.690 CC test/accel/dif/dif.o 00:02:01.690 LINK app_repeat 00:02:01.690 LINK hello_sock 00:02:01.950 LINK idxd_perf 00:02:01.950 LINK scheduler 00:02:01.950 LINK thread 00:02:01.950 CC test/lvol/esnap/esnap.o 00:02:01.950 LINK boot_partition 00:02:01.950 LINK doorbell_aers 00:02:01.950 LINK reserve 00:02:01.950 LINK startup 00:02:01.950 LINK err_injection 00:02:01.950 LINK nvme_dp 00:02:01.950 LINK connect_stress 00:02:01.950 LINK memory_ut 00:02:01.950 LINK fused_ordering 00:02:01.950 LINK reset 00:02:01.950 LINK mkfs 00:02:01.950 LINK simple_copy 00:02:01.950 LINK nvme_compliance 00:02:01.950 LINK overhead 00:02:01.950 LINK sgl 00:02:01.950 LINK aer 00:02:01.950 LINK fdp 00:02:02.210 LINK dif 00:02:02.210 CC examples/nvme/hotplug/hotplug.o 00:02:02.210 CC examples/nvme/reconnect/reconnect.o 00:02:02.210 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:02.210 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:02.210 CC examples/nvme/hello_world/hello_world.o 00:02:02.210 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:02.210 CC examples/nvme/arbitration/arbitration.o 00:02:02.210 CC examples/nvme/abort/abort.o 00:02:02.471 LINK iscsi_fuzz 00:02:02.471 CC examples/accel/perf/accel_perf.o 00:02:02.471 CC examples/blob/cli/blobcli.o 00:02:02.471 CC examples/blob/hello_world/hello_blob.o 00:02:02.471 LINK cmb_copy 00:02:02.471 LINK pmr_persistence 00:02:02.471 LINK hotplug 00:02:02.471 LINK hello_world 00:02:02.732 LINK reconnect 00:02:02.732 LINK arbitration 00:02:02.732 LINK abort 00:02:02.732 LINK hello_blob 
00:02:02.732 LINK nvme_manage 00:02:02.732 CC test/bdev/bdevio/bdevio.o 00:02:02.732 LINK cuse 00:02:02.732 LINK accel_perf 00:02:02.993 LINK blobcli 00:02:03.254 LINK bdevio 00:02:03.515 CC examples/bdev/hello_world/hello_bdev.o 00:02:03.515 CC examples/bdev/bdevperf/bdevperf.o 00:02:03.776 LINK hello_bdev 00:02:04.037 LINK bdevperf 00:02:04.613 CC examples/nvmf/nvmf/nvmf.o 00:02:04.961 LINK nvmf 00:02:05.919 LINK esnap 00:02:06.179 00:02:06.179 real 0m50.706s 00:02:06.179 user 6m32.101s 00:02:06.179 sys 4m10.064s 00:02:06.179 12:47:27 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:06.179 12:47:27 make -- common/autotest_common.sh@10 -- $ set +x 00:02:06.179 ************************************ 00:02:06.179 END TEST make 00:02:06.179 ************************************ 00:02:06.441 12:47:28 -- common/autotest_common.sh@1142 -- $ return 0 00:02:06.441 12:47:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:06.441 12:47:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:06.441 12:47:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:06.441 12:47:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.441 12:47:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:06.441 12:47:28 -- pm/common@44 -- $ pid=328451 00:02:06.441 12:47:28 -- pm/common@50 -- $ kill -TERM 328451 00:02:06.441 12:47:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.441 12:47:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:06.441 12:47:28 -- pm/common@44 -- $ pid=328452 00:02:06.441 12:47:28 -- pm/common@50 -- $ kill -TERM 328452 00:02:06.441 12:47:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.441 12:47:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:06.441 12:47:28 -- pm/common@44 -- $ pid=328454 00:02:06.441 12:47:28 -- pm/common@50 -- $ kill -TERM 328454 00:02:06.441 12:47:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.441 12:47:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:06.441 12:47:28 -- pm/common@44 -- $ pid=328477 00:02:06.441 12:47:28 -- pm/common@50 -- $ sudo -E kill -TERM 328477 00:02:06.441 12:47:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:06.441 12:47:28 -- nvmf/common.sh@7 -- # uname -s 00:02:06.441 12:47:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:06.441 12:47:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:06.441 12:47:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:06.441 12:47:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:06.441 12:47:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:06.441 12:47:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:06.441 12:47:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:06.441 12:47:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:06.441 12:47:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:06.441 12:47:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:06.441 12:47:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:06.441 12:47:28 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:06.441 12:47:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:06.441 12:47:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:06.441 12:47:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:06.441 12:47:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:06.441 12:47:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:06.441 12:47:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:06.441 12:47:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:06.441 12:47:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:06.441 12:47:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.441 12:47:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.441 12:47:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.441 12:47:28 -- paths/export.sh@5 -- # export PATH 00:02:06.441 12:47:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.441 12:47:28 -- nvmf/common.sh@47 -- # : 0 00:02:06.441 12:47:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:06.441 12:47:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:06.441 12:47:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:06.441 12:47:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:06.441 12:47:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:06.441 12:47:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:06.441 12:47:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:06.441 12:47:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:06.441 12:47:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:06.441 12:47:28 -- spdk/autotest.sh@32 -- # uname -s 00:02:06.441 12:47:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:06.441 12:47:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:06.441 12:47:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:06.441 12:47:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:06.441 12:47:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:06.441 12:47:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:06.441 12:47:28 -- spdk/autotest.sh@46 -- # type -P 
udevadm 00:02:06.441 12:47:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:06.441 12:47:28 -- spdk/autotest.sh@48 -- # udevadm_pid=391585 00:02:06.441 12:47:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:06.441 12:47:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:06.441 12:47:28 -- pm/common@17 -- # local monitor 00:02:06.441 12:47:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.441 12:47:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.441 12:47:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.441 12:47:28 -- pm/common@21 -- # date +%s 00:02:06.441 12:47:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.441 12:47:28 -- pm/common@21 -- # date +%s 00:02:06.441 12:47:28 -- pm/common@25 -- # sleep 1 00:02:06.441 12:47:28 -- pm/common@21 -- # date +%s 00:02:06.441 12:47:28 -- pm/common@21 -- # date +%s 00:02:06.441 12:47:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721040448 00:02:06.441 12:47:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721040448 00:02:06.441 12:47:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721040448 00:02:06.441 12:47:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721040448 00:02:06.442 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721040448_collect-vmstat.pm.log 00:02:06.442 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721040448_collect-cpu-load.pm.log 00:02:06.442 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721040448_collect-cpu-temp.pm.log 00:02:06.709 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721040448_collect-bmc-pm.bmc.pm.log 00:02:07.655 12:47:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:07.655 12:47:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:07.655 12:47:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:07.655 12:47:29 -- common/autotest_common.sh@10 -- # set +x 00:02:07.655 12:47:29 -- spdk/autotest.sh@59 -- # create_test_list 00:02:07.655 12:47:29 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:07.655 12:47:29 -- common/autotest_common.sh@10 -- # set +x 00:02:07.655 12:47:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:07.655 12:47:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.655 12:47:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.655 12:47:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:07.655 12:47:29 -- spdk/autotest.sh@63 -- # cd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.655 12:47:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:07.655 12:47:29 -- common/autotest_common.sh@1455 -- # uname 00:02:07.655 12:47:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:07.655 12:47:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:07.655 12:47:29 -- common/autotest_common.sh@1475 -- # uname 00:02:07.655 12:47:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:07.655 12:47:29 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:07.655 12:47:29 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:07.655 12:47:29 -- spdk/autotest.sh@72 -- # hash lcov 00:02:07.655 12:47:29 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:07.655 12:47:29 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:07.655 --rc lcov_branch_coverage=1 00:02:07.655 --rc lcov_function_coverage=1 00:02:07.655 --rc genhtml_branch_coverage=1 00:02:07.655 --rc genhtml_function_coverage=1 00:02:07.655 --rc genhtml_legend=1 00:02:07.655 --rc geninfo_all_blocks=1 00:02:07.655 ' 00:02:07.655 12:47:29 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:07.655 --rc lcov_branch_coverage=1 00:02:07.655 --rc lcov_function_coverage=1 00:02:07.655 --rc genhtml_branch_coverage=1 00:02:07.655 --rc genhtml_function_coverage=1 00:02:07.655 --rc genhtml_legend=1 00:02:07.655 --rc geninfo_all_blocks=1 00:02:07.655 ' 00:02:07.655 12:47:29 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:07.655 --rc lcov_branch_coverage=1 00:02:07.655 --rc lcov_function_coverage=1 00:02:07.655 --rc genhtml_branch_coverage=1 00:02:07.655 --rc genhtml_function_coverage=1 00:02:07.655 --rc genhtml_legend=1 00:02:07.655 --rc geninfo_all_blocks=1 00:02:07.655 --no-external' 00:02:07.655 12:47:29 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:07.655 --rc lcov_branch_coverage=1 00:02:07.655 --rc lcov_function_coverage=1 00:02:07.655 --rc genhtml_branch_coverage=1 00:02:07.655 --rc genhtml_function_coverage=1 00:02:07.655 --rc genhtml_legend=1 00:02:07.655 --rc geninfo_all_blocks=1 00:02:07.655 --no-external' 00:02:07.655 12:47:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:07.655 lcov: LCOV version 1.14 00:02:07.655 12:47:29 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:09.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:09.039 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:09.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:09.039 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:09.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:09.039 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:09.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:09.039 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:09.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:09.039 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:09.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:09.039 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:09.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:09.039 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:09.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:09.039 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:09.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:09.040 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:09.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:09.040 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:09.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:09.040 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:09.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:09.040 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:09.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:09.040 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:09.301 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:09.301 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:09.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:09.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:09.562 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:09.562 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:09.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:09.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:09.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:09.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:09.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:09.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:09.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:09.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:09.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:09.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:09.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:09.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:09.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:09.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:09.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:09.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:09.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:09.563 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:09.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:09.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:09.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:09.563 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:09.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:09.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:09.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:09.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:09.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:09.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:09.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:09.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:09.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:09.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:09.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:09.823 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:09.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:09.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:09.823 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:24.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:24.730 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:37.029 12:47:58 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:37.029 12:47:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:37.029 12:47:58 -- common/autotest_common.sh@10 -- # set +x 00:02:37.029 12:47:58 -- spdk/autotest.sh@91 -- # rm -f 00:02:37.029 12:47:58 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.232 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:41.232 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:41.232 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:41.232 12:48:02 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:41.232 12:48:02 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:41.232 12:48:02 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:41.232 12:48:02 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:41.232 12:48:02 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:41.232 12:48:02 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:41.232 12:48:02 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:41.232 12:48:02 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:41.232 12:48:02 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:41.232 12:48:02 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:41.232 12:48:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:41.232 12:48:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:41.232 
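Note on the zoned-device probe traced just above: get_zoned_devs walks /sys/block/nvme* and checks each namespace's queue/zoned attribute; on this host nvme0n1 reports "none", so nothing is filtered out before the reset. A minimal stand-alone sketch of that style of check, assuming bash on Linux sysfs (this is not the SPDK helper itself, only the idea it traces):

    # Collect NVMe block devices whose queue/zoned attribute is anything other than "none".
    declare -A zoned_devs=()
    for sysdir in /sys/block/nvme*; do
        [[ -e "$sysdir/queue/zoned" ]] || continue
        if [[ $(cat "$sysdir/queue/zoned") != none ]]; then
            dev=$(basename "$sysdir")
            zoned_devs[$dev]=1    # later steps can skip or special-case these namespaces
        fi
    done
    echo "found ${#zoned_devs[@]} zoned device(s): ${!zoned_devs[*]}"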
12:48:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:41.232 12:48:02 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:41.232 12:48:02 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:41.232 No valid GPT data, bailing 00:02:41.232 12:48:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:41.232 12:48:02 -- scripts/common.sh@391 -- # pt= 00:02:41.232 12:48:02 -- scripts/common.sh@392 -- # return 1 00:02:41.232 12:48:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:41.232 1+0 records in 00:02:41.232 1+0 records out 00:02:41.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00404218 s, 259 MB/s 00:02:41.232 12:48:02 -- spdk/autotest.sh@118 -- # sync 00:02:41.232 12:48:02 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:41.232 12:48:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:41.232 12:48:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:49.383 12:48:10 -- spdk/autotest.sh@124 -- # uname -s 00:02:49.383 12:48:10 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:49.383 12:48:10 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:49.383 12:48:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:49.383 12:48:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:49.383 12:48:10 -- common/autotest_common.sh@10 -- # set +x 00:02:49.383 ************************************ 00:02:49.383 START TEST setup.sh 00:02:49.383 ************************************ 00:02:49.383 12:48:10 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:49.383 * Looking for test storage... 00:02:49.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:49.383 12:48:11 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:49.383 12:48:11 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:49.383 12:48:11 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:49.383 12:48:11 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:49.383 12:48:11 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:49.383 12:48:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:49.383 ************************************ 00:02:49.383 START TEST acl 00:02:49.383 ************************************ 00:02:49.383 12:48:11 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:49.383 * Looking for test storage... 
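The block_in_use / dd sequence above is autotest's scratch-disk preparation: spdk-gpt.py and blkid look for an existing partition table on /dev/nvme0n1, and since none is found ("No valid GPT data, bailing") the first MiB of the device is zeroed and flushed so stale metadata from earlier runs cannot leak into the tests. Roughly the same effect as a hedged stand-alone sketch (destructive; only meaningful on a dedicated test disk):

    dev=/dev/nvme0n1
    # Ask blkid for the partition-table type; empty output means no table was recognized.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1   # wipe the first MiB of the device
        sync                                      # make sure the zeroes reach the media
    fi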
00:02:49.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:49.383 12:48:11 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:49.383 12:48:11 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:49.383 12:48:11 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:49.383 12:48:11 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:49.383 12:48:11 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:49.383 12:48:11 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:49.383 12:48:11 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:49.383 12:48:11 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:49.383 12:48:11 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:49.383 12:48:11 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:49.383 12:48:11 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:49.383 12:48:11 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:49.383 12:48:11 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:49.383 12:48:11 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:49.383 12:48:11 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:49.383 12:48:11 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.593 12:48:15 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:53.593 12:48:15 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:53.593 12:48:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.593 12:48:15 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:53.593 12:48:15 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.593 12:48:15 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:57.801 Hugepages 00:02:57.801 node hugesize free / total 00:02:57.801 12:48:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:57.801 12:48:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:57.801 12:48:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.801 12:48:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:57.801 12:48:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:57.801 12:48:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.801 12:48:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:57.801 12:48:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:57.801 12:48:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.801 00:02:57.802 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:57.802 12:48:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:57.802 12:48:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:57.802 12:48:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:57.802 12:48:19 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:57.802 12:48:19 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:57.802 12:48:19 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:57.802 12:48:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:57.802 ************************************ 00:02:57.802 START TEST denied 00:02:57.802 ************************************ 00:02:57.802 12:48:19 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:57.802 12:48:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:02:57.802 12:48:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:57.802 12:48:19 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:02:57.802 12:48:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.802 12:48:19 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:02.008 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:02.008 12:48:23 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:02.008 12:48:23 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:02.008 12:48:23 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:02.008 12:48:23 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:02.008 12:48:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:02.008 12:48:23 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:02.008 12:48:23 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:02.008 12:48:23 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:02.008 12:48:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.008 12:48:23 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.294 00:03:07.294 real 0m8.928s 00:03:07.294 user 0m2.932s 00:03:07.294 sys 0m5.340s 00:03:07.294 12:48:28 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:07.294 12:48:28 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:07.294 ************************************ 00:03:07.294 END TEST denied 00:03:07.294 ************************************ 00:03:07.294 12:48:28 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:07.294 12:48:28 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:07.294 12:48:28 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:07.294 12:48:28 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:07.294 12:48:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:07.294 ************************************ 00:03:07.294 START TEST allowed 00:03:07.294 ************************************ 00:03:07.294 12:48:28 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:07.294 12:48:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:07.294 12:48:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:07.294 12:48:28 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:07.295 12:48:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.295 12:48:28 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.662 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:12.662 12:48:33 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:12.662 12:48:33 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:12.662 12:48:33 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:12.662 12:48:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:12.662 12:48:33 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:16.857 00:03:16.857 real 0m9.806s 00:03:16.857 user 0m2.915s 00:03:16.857 sys 0m5.259s 00:03:16.857 12:48:38 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.857 12:48:38 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:16.857 ************************************ 00:03:16.857 END TEST allowed 00:03:16.857 ************************************ 00:03:16.857 12:48:38 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:16.857 00:03:16.857 real 0m26.970s 00:03:16.857 user 0m8.854s 00:03:16.857 sys 0m16.037s 00:03:16.857 12:48:38 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:16.857 12:48:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:16.857 ************************************ 00:03:16.857 END TEST acl 00:03:16.857 ************************************ 00:03:16.857 12:48:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:16.857 12:48:38 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:16.857 12:48:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.857 12:48:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.857 12:48:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:16.857 ************************************ 00:03:16.857 START TEST hugepages 00:03:16.857 ************************************ 00:03:16.857 12:48:38 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:16.857 * Looking for test storage... 00:03:16.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106442452 kB' 'MemAvailable: 110174080 kB' 'Buffers: 4832 kB' 'Cached: 10635396 kB' 'SwapCached: 0 kB' 'Active: 7567156 kB' 'Inactive: 3701932 kB' 'Active(anon): 7075724 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 632192 kB' 'Mapped: 193012 kB' 'Shmem: 6446864 kB' 'KReclaimable: 579208 kB' 'Slab: 1453160 kB' 'SReclaimable: 579208 kB' 'SUnreclaim: 873952 kB' 'KernelStack: 27728 kB' 'PageTables: 9332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 8684012 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237660 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.857 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:16.858 12:48:38 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:16.858 12:48:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:16.858 12:48:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:16.858 12:48:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:16.858 ************************************ 00:03:16.858 START TEST default_setup 00:03:16.858 ************************************ 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.858 12:48:38 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.106 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:80:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:03:21.106 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:21.106 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108605000 kB' 'MemAvailable: 112336524 kB' 'Buffers: 4832 kB' 'Cached: 10635532 kB' 'SwapCached: 0 kB' 'Active: 7584980 kB' 'Inactive: 3701932 kB' 'Active(anon): 7093548 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 650588 kB' 'Mapped: 193276 kB' 'Shmem: 6447000 kB' 'KReclaimable: 579104 kB' 'Slab: 1450180 kB' 'SReclaimable: 579104 kB' 
'SUnreclaim: 871076 kB' 'KernelStack: 27760 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8702672 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237756 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.106 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108609644 kB' 'MemAvailable: 112341168 kB' 'Buffers: 4832 kB' 'Cached: 10635532 kB' 'SwapCached: 0 kB' 'Active: 7585864 kB' 'Inactive: 3701932 kB' 'Active(anon): 7094432 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651528 kB' 'Mapped: 193280 kB' 'Shmem: 6447000 kB' 'KReclaimable: 579104 kB' 'Slab: 1450180 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871076 kB' 'KernelStack: 27696 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8704300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237756 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.107 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.108 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108611212 kB' 'MemAvailable: 112342736 kB' 'Buffers: 4832 kB' 'Cached: 10635552 kB' 'SwapCached: 0 kB' 'Active: 7585052 kB' 'Inactive: 3701932 kB' 'Active(anon): 7093620 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 650276 kB' 'Mapped: 193264 kB' 'Shmem: 6447020 kB' 'KReclaimable: 579104 kB' 'Slab: 1450232 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871128 kB' 'KernelStack: 27808 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8702712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237724 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.109 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 
12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.110 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:21.111 nr_hugepages=1024 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.111 resv_hugepages=0 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.111 surplus_hugepages=0 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.111 anon_hugepages=0 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.111 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 
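At this point the script holds surp=0, resv=0 and nr_hugepages=1024, so the checks at hugepages.sh@107 and @109 reduce to 1024 == 1024 + 0 + 0 and both pass. The figures also agree with the snapshot printed above: Hugepagesize 2048 kB times HugePages_Total 1024 gives 2097152 kB, exactly the reported Hugetlb. A sketch of that consistency check, reusing the variable names echoed in the trace:

    nr_hugepages=1024 surp=0 resv=0            # values echoed by the script above
    if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )); then
        echo "default_setup: 1024 x 2048 kB pages allocated, none reserved or surplus"
    fi
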
12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108612016 kB' 'MemAvailable: 112343540 kB' 'Buffers: 4832 kB' 'Cached: 10635576 kB' 'SwapCached: 0 kB' 'Active: 7584988 kB' 'Inactive: 3701932 kB' 'Active(anon): 7093556 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 650088 kB' 'Mapped: 193332 kB' 'Shmem: 6447044 kB' 'KReclaimable: 579104 kB' 'Slab: 1450232 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871128 kB' 'KernelStack: 27680 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8702736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237724 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.112 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59388884 kB' 'MemUsed: 6270124 kB' 'SwapCached: 0 kB' 'Active: 1478060 kB' 'Inactive: 288448 kB' 'Active(anon): 1320312 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1621444 kB' 'Mapped: 39492 kB' 'AnonPages: 148432 kB' 'Shmem: 1175248 kB' 'KernelStack: 14440 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 751876 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 426992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.113 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 
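After the totals check, hugepages.sh@112 get_nodes walks /sys/devices/system/node/node+([0-9]) and records nodes_sys[0]=1024, nodes_sys[1]=0 with no_nodes=2, then re-runs get_meminfo per node; for node 0 the source switches to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that common.sh@29 strips before the same key scan runs again. A standalone sketch of that per-node read (simplified; in the real common.sh it all happens inside get_meminfo):

    shopt -s extglob                           # needed for the +([0-9]) pattern, as in common.sh
    node=0 get=HugePages_Surp
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")           # drop the "Node 0 " prefix each line carries
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; break; }
    done
    # On node0 this yields 0; all 1024 default pages sit on node0 (nodes_sys[0]=1024, nodes_sys[1]=0).
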
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.114 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
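Editor's note: the entries that follow show the scan finally matching HugePages_Surp, echoing 0, and returning; verify_nr_hugepages then folds that surplus into its per-node counter and prints the "node0=1024 expecting 1024" check. A rough sketch of that bookkeeping follows, with hypothetical variable names reconstructed from the trace rather than the script's actual code.

    # Sketch only: the per-node check that produces "node0=1024 expecting 1024" just below.
    expected=1024                          # the count this test expects ("expecting 1024")
    declare -A nodes_test=([0]=1024)       # per-node HugePages_Total gathered earlier in the log
    surplus=0                              # HugePages_Surp from the scan that completes below
    (( nodes_test[0] += surplus ))
    echo "node0=${nodes_test[0]} expecting $expected"
    [[ ${nodes_test[0]} == "$expected" ]]  # the test passes when the two match

With the 2048 kB Hugepagesize shown in the meminfo dumps elsewhere in this log, 1024 pages is 1024 x 2048 kB = 2,097,152 kB, which matches the Hugetlb figure in those same dumps.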
00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:21.115 node0=1024 expecting 1024 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:21.115 00:03:21.115 real 0m4.136s 00:03:21.115 user 0m1.581s 00:03:21.115 sys 0m2.542s 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:21.115 12:48:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:21.115 ************************************ 00:03:21.115 END TEST default_setup 00:03:21.115 ************************************ 00:03:21.115 12:48:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:21.115 12:48:42 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:21.115 12:48:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:21.115 12:48:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.115 12:48:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:21.115 ************************************ 00:03:21.115 START TEST per_node_1G_alloc 00:03:21.115 ************************************ 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.115 12:48:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.411 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:24.411 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:24.411 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:24.676 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.676 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108640408 kB' 'MemAvailable: 112371932 kB' 'Buffers: 4832 kB' 'Cached: 10635696 kB' 'SwapCached: 0 kB' 'Active: 7585396 kB' 'Inactive: 3701932 kB' 'Active(anon): 7093964 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649652 kB' 'Mapped: 192196 kB' 'Shmem: 6447164 kB' 'KReclaimable: 579104 kB' 'Slab: 1450564 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871460 kB' 'KernelStack: 27792 kB' 'PageTables: 9276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8689276 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237676 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.677 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108641164 kB' 'MemAvailable: 112372688 kB' 'Buffers: 4832 kB' 'Cached: 10635700 kB' 'SwapCached: 0 kB' 'Active: 7584320 kB' 'Inactive: 3701932 kB' 'Active(anon): 7092888 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649024 kB' 'Mapped: 192100 kB' 'Shmem: 6447168 kB' 'KReclaimable: 579104 kB' 'Slab: 1450564 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871460 kB' 'KernelStack: 27760 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8689296 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237660 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.678 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 
12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.679 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108641164 kB' 'MemAvailable: 112372688 kB' 'Buffers: 4832 kB' 'Cached: 10635716 kB' 'SwapCached: 0 kB' 'Active: 7584344 kB' 'Inactive: 3701932 kB' 'Active(anon): 7092912 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649028 kB' 'Mapped: 192100 kB' 'Shmem: 6447184 kB' 'KReclaimable: 579104 kB' 'Slab: 1450564 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871460 kB' 'KernelStack: 27760 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8689316 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237660 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 
12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.680 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.681 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.682 12:48:46 
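The long runs of "continue" entries above come from setup/common.sh's get_meminfo helper scanning every /proc/meminfo field until it reaches the requested key (HugePages_Surp, then HugePages_Rsvd, both 0 in this run). The following is a minimal bash sketch of that lookup pattern, assuming the names visible in the trace (get_meminfo, mem_f); the actual script reads the file into an array with mapfile and walks it with the same IFS=': ' read loop, so this is illustrative rather than the verbatim setup/common.sh code.

#!/usr/bin/env bash
# Sketch of the key-matching loop driving the trace above: every field that is
# not the requested one is skipped (the "continue" lines); the matching field
# has its value echoed and the function returns.
get_meminfo() {
    local get=$1 var val _
    local mem_f=/proc/meminfo          # per-node variant uses node<N>/meminfo

    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}

# The two values the trace just computed (surp=0, resv=0):
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
echo "surp=$surp resv=$resv"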
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.682 nr_hugepages=1024 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.682 resv_hugepages=0 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.682 surplus_hugepages=0 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.682 anon_hugepages=0 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108641104 kB' 'MemAvailable: 112372628 kB' 'Buffers: 4832 kB' 'Cached: 10635740 kB' 'SwapCached: 0 kB' 'Active: 7584368 kB' 'Inactive: 3701932 kB' 'Active(anon): 7092936 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649028 kB' 'Mapped: 192100 kB' 'Shmem: 6447208 kB' 'KReclaimable: 579104 kB' 'Slab: 1450564 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871460 kB' 'KernelStack: 27760 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8689340 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237660 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 
74448896 kB' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.682 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.683 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.684 12:48:46 
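Here the trace has HugePages_Total=1024 and verifies (( 1024 == nr_hugepages + surp + resv )) before calling get_nodes, which enumerates /sys/devices/system/node/node[0-9]* and records 512 expected pages per node (no_nodes=2 on this machine). Below is a hedged sketch of that accounting and node-enumeration step; the variable names (nr_hugepages, surp, resv, nodes_sys, no_nodes) are taken from the trace, and the fixed 512-per-node split mirrors what this particular run assigns rather than a general rule.

#!/usr/bin/env bash
# Sketch of the hugepage accounting check and NUMA-node enumeration seen in
# the trace (illustrative, not the verbatim setup/hugepages.sh code).
shopt -s extglob nullglob

nr_hugepages=1024   # requested pool size
surp=0              # HugePages_Surp from get_meminfo
resv=0              # HugePages_Rsvd from get_meminfo
total=1024          # HugePages_Total from get_meminfo

# The pool is consistent only if total == requested + surplus + reserved.
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2

# get_nodes: list the NUMA nodes and note the expected per-node share
# (512 + 512 = 1024 in this run).
declare -a nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512
done
no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes per-node shares: ${nodes_sys[*]}"

The per-node checks that follow read /sys/devices/system/node/node0/meminfo instead of /proc/meminfo; those files prefix every line with "Node <n>", which the trace strips via mem=("${mem[@]#Node +([0-9]) }") before running the same key-matching loop again for each node.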
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60462392 kB' 'MemUsed: 5196616 kB' 'SwapCached: 0 kB' 'Active: 1477504 kB' 'Inactive: 288448 kB' 'Active(anon): 1319756 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1621544 kB' 'Mapped: 38744 kB' 'AnonPages: 147624 kB' 'Shmem: 1175348 kB' 'KernelStack: 14280 kB' 'PageTables: 3272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 752132 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 427248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.684 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.684 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.946 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 
12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.947 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 48177984 kB' 'MemUsed: 12501856 kB' 'SwapCached: 0 kB' 'Active: 6107164 kB' 'Inactive: 3413484 kB' 'Active(anon): 5773480 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3413484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9019048 kB' 'Mapped: 153356 kB' 'AnonPages: 501680 kB' 'Shmem: 5271880 kB' 
'KernelStack: 13480 kB' 'PageTables: 5884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254220 kB' 'Slab: 698432 kB' 'SReclaimable: 254220 kB' 'SUnreclaim: 444212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.948 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.949 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:24.949 node0=512 expecting 512 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:24.949 node1=512 expecting 512 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:24.949 00:03:24.949 real 0m3.998s 00:03:24.949 user 0m1.509s 00:03:24.949 sys 0m2.545s 00:03:24.949 12:48:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:24.949 12:48:46 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:24.949 ************************************ 00:03:24.949 END TEST per_node_1G_alloc 00:03:24.949 ************************************ 00:03:24.949 12:48:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:24.949 12:48:46 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:24.949 12:48:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.949 12:48:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.949 12:48:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.949 ************************************ 00:03:24.949 START TEST even_2G_alloc 00:03:24.949 ************************************ 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:24.949 12:48:46 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.949 12:48:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.161 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:29.161 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.161 12:48:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108626348 kB' 'MemAvailable: 112357872 kB' 'Buffers: 4832 kB' 'Cached: 10635876 kB' 'SwapCached: 0 kB' 'Active: 7587448 kB' 'Inactive: 3701932 kB' 'Active(anon): 7096016 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651796 kB' 'Mapped: 192216 kB' 'Shmem: 6447344 kB' 'KReclaimable: 579104 kB' 'Slab: 1450800 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871696 kB' 'KernelStack: 27696 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8690264 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237628 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.161 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.162 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108627132 kB' 'MemAvailable: 112358656 kB' 'Buffers: 4832 kB' 'Cached: 10635880 kB' 'SwapCached: 0 kB' 'Active: 7588076 kB' 'Inactive: 3701932 kB' 'Active(anon): 7096644 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 652452 kB' 'Mapped: 192720 kB' 'Shmem: 6447348 kB' 'KReclaimable: 579104 kB' 'Slab: 1450784 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871680 kB' 'KernelStack: 27696 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8691772 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237596 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB'
The setup/common.sh@31/@32 read loop then walked that snapshot key by key (MemTotal through HugePages_Rsvd), continuing past every non-matching entry until HugePages_Surp matched with value 0.
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.163 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:29.164 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
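The trace above is common.sh's get_meminfo walking a meminfo snapshot key by key until it reaches the requested counter. Below is a minimal, hedged sketch of that pattern, not the exact common.sh source, only the shape visible in the trace; the helper name matches the log, everything else is illustrative:

  #!/usr/bin/env bash
  shopt -s extglob

  # Sketch of the get_meminfo pattern seen in the trace:
  # get_meminfo KEY [NODE] prints the numeric value of KEY.
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f=/proc/meminfo

      # Per-node hugepage counters live under /sys, as the node=0 call later in the log shows.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo

      local -a mem
      mapfile -t mem < "$mem_f"
      # Node files prefix every line with "Node N "; strip it like the trace does.
      mem=("${mem[@]#Node +([0-9]) }")

      while IFS=': ' read -r var val _; do
          # Skip every key until the requested one matches, then print its value.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Surp      # -> 0 on the box logged above
  get_meminfo HugePages_Total 0   # -> node 0's persistent hugepage count

Each xtrace "[[ KEY == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" pair in the log corresponds to one non-matching iteration of that read loop.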
00:03:29.164 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:29.164 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:29.164 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:29.164 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.165 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.165 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108627216 kB' 'MemAvailable: 112358740 kB' 'Buffers: 4832 kB' 'Cached: 10635896 kB' 'SwapCached: 0 kB' 'Active: 7591524 kB' 'Inactive: 3701932 kB' 'Active(anon): 7100092 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 656808 kB' 'Mapped: 192608 kB' 'Shmem: 6447364 kB' 'KReclaimable: 579104 kB' 'Slab: 1450836 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871732 kB' 'KernelStack: 27728 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8696424 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237584 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB'
The setup/common.sh@31/@32 read loop again walked every key of that snapshot, continuing past each non-match until HugePages_Rsvd matched with value 0.
00:03:29.166 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.166 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:29.166 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:29.166 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:29.166 nr_hugepages=1024
00:03:29.166 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:29.166 resv_hugepages=0
00:03:29.166 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:29.166 surplus_hugepages=0
00:03:29.166 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:29.166 anon_hugepages=0
00:03:29.166 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:29.166 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
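With anon=0, surp=0 and resv=0 collected, hugepages.sh checks that the 1024 requested 2048 kB pages (2 GB total) are fully accounted for. A rough sketch of that bookkeeping is shown below; it assumes the get_meminfo sketch from earlier is defined, and the name verify_even_2G_alloc is illustrative, not the script's own:

  #!/usr/bin/env bash
  # Sketch of the accounting check traced above (illustrative names).
  verify_even_2G_alloc() {
      local expected=$1                     # 1024 pages * 2048 kB = 2 GB
      local anon surp resv total

      anon=$(get_meminfo AnonHugePages)     # transparent hugepage usage, expected 0
      surp=$(get_meminfo HugePages_Surp)    # surplus pages, expected 0
      resv=$(get_meminfo HugePages_Rsvd)    # reserved-but-unfaulted pages, expected 0
      total=$(get_meminfo HugePages_Total)

      echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

      # The pool must hold exactly the requested pages, with nothing hidden
      # behind surplus or reserved counts.
      (( total == expected + surp + resv )) && (( total == expected ))
  }

  verify_even_2G_alloc 1024    # succeeds on the state logged above

The trace that follows is exactly that last step: one more get_meminfo call, this time for HugePages_Total, which must come back as 1024.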
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108632500 kB' 'MemAvailable: 112364024 kB' 'Buffers: 4832 kB' 'Cached: 10635920 kB' 'SwapCached: 0 kB' 'Active: 7586296 kB' 'Inactive: 3701932 kB' 'Active(anon): 7094864 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651072 kB' 'Mapped: 192104 kB' 'Shmem: 6447388 kB' 'KReclaimable: 579104 kB' 'Slab: 1450820 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871716 kB' 'KernelStack: 27728 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8690332 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237596 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 
12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.167 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
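The trace above shows get_meminfo switching from /proc/meminfo to /sys/devices/system/node/node0/meminfo, stripping the "Node N " prefix, and then scanning the file key by key until it reaches the requested field (HugePages_Surp here). A minimal standalone sketch of that lookup follows; it is an approximation written from what the trace shows, not the verbatim SPDK setup/common.sh helper.

#!/usr/bin/env bash
shopt -s extglob
# Minimal sketch of the per-node meminfo lookup traced above; a standalone
# approximation of the behaviour, not the verbatim SPDK helper.
get_meminfo() {
    local get=$1 node=$2 var val _ mem
    local mem_f=/proc/meminfo
    # Prefer the per-NUMA-node view when a node index is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines read "Node 0 HugePages_Surp: 0"; dropping the
    # "Node N " prefix lets both files parse identically.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan key by key, skipping every field that is not the one requested.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp 0   # prints 0 for node0 in the run above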
00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60442228 kB' 'MemUsed: 5216780 kB' 'SwapCached: 0 kB' 'Active: 1477060 kB' 'Inactive: 288448 kB' 'Active(anon): 1319312 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1621636 kB' 'Mapped: 38748 kB' 'AnonPages: 147100 kB' 'Shmem: 1175440 kB' 'KernelStack: 14264 kB' 'PageTables: 3212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 752140 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 427256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.168 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.169 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 48192608 kB' 'MemUsed: 12487232 kB' 'SwapCached: 0 kB' 'Active: 6109408 kB' 'Inactive: 3413484 kB' 'Active(anon): 5775724 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3413484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9019160 kB' 'Mapped: 153356 kB' 'AnonPages: 504116 kB' 'Shmem: 5271992 kB' 'KernelStack: 13480 kB' 'PageTables: 5844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254220 kB' 'Slab: 698680 kB' 'SReclaimable: 254220 kB' 'SUnreclaim: 444460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 
12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.170 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
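Once the scan below returns 0 surplus pages for node1 as well, the even_2G_alloc check folds reserved and surplus pages into the per-node totals and prints the "nodeN=512 expecting 512" lines seen further down. A condensed, hypothetical sketch of that comparison follows; the real setup/hugepages.sh builds sorted_t/sorted_s lookup tables, so the names and the direct per-node compare here are simplifications.

#!/usr/bin/env bash
# Condensed, hypothetical sketch of the per-node verification; simplified
# from what the trace shows rather than copied from setup/hugepages.sh.
nodes_sys=(512 512)    # expected split: 1024 x 2MB pages over 2 nodes
nodes_test=(512 512)   # HugePages_Total read back from each node above
resv=0                 # reserved pages reported in this run
surp=0                 # HugePages_Surp was 0 for both nodes above

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv + surp ))
done

ok=1
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    (( nodes_test[node] == nodes_sys[node] )) || ok=0
done
(( ok )) && echo "even_2G_alloc: per-node split matches"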
00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:29.171 node0=512 expecting 512 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:29.171 node1=512 expecting 512 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:29.171 00:03:29.171 real 0m3.808s 00:03:29.171 user 0m1.520s 00:03:29.171 sys 0m2.335s 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.171 12:48:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:29.171 ************************************ 00:03:29.171 END TEST even_2G_alloc 00:03:29.171 ************************************ 00:03:29.171 12:48:50 setup.sh.hugepages -- 
common/autotest_common.sh@1142 -- # return 0 00:03:29.171 12:48:50 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:29.171 12:48:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.171 12:48:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.171 12:48:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:29.171 ************************************ 00:03:29.171 START TEST odd_alloc 00:03:29.171 ************************************ 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.171 12:48:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.472 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 
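In the odd_alloc case HUGEMEM=2049 requests 1025 hugepages, which cannot split evenly over two NUMA nodes; the trace above sets nodes_test[1]=512 and then nodes_test[0]=513, i.e. each node (working from the last one back) gets the integer share of whatever is still unassigned, so the odd page lands on node 0. A small sketch of that distribution follows, with a hypothetical helper name and simplified variables rather than the verbatim setup/hugepages.sh code.

#!/usr/bin/env bash
# Sketch of the per-node distribution implied by the trace above
# (hypothetical helper name, simplified variables).
distribute_hugepages() {
    local remaining=$1 nodes=$2 i
    local -a per_node
    # Work from the last node back, giving each node its integer share of
    # what is still unassigned; any remainder accumulates on node 0.
    while (( nodes > 0 )); do
        per_node[nodes - 1]=$(( remaining / nodes ))
        (( remaining -= per_node[nodes - 1] ))
        (( nodes-- ))
    done
    for i in "${!per_node[@]}"; do
        echo "node$i=${per_node[i]}"
    done
}

distribute_hugepages 1025 2   # -> node0=513, node1=512, as in this log
distribute_hugepages 1024 2   # even case -> 512 per node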
0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:32.472 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108687768 kB' 'MemAvailable: 112419292 kB' 'Buffers: 4832 kB' 'Cached: 10636056 kB' 'SwapCached: 0 kB' 'Active: 7583388 kB' 'Inactive: 3701932 kB' 'Active(anon): 7091956 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647704 kB' 'Mapped: 192292 kB' 'Shmem: 6447524 kB' 'KReclaimable: 579104 kB' 'Slab: 1450872 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871768 kB' 'KernelStack: 27792 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8692196 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237660 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.472 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 
12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 
12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.473 12:48:54 
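[editor's note] The repeated IFS=': ' / read -r var val _ / continue steps traced above are setup/common.sh walking /proc/meminfo one field at a time until it reaches the requested key (AnonHugePages here, which yields anon=0). The snippet below is a minimal standalone sketch of that lookup pattern, not the SPDK helper itself: the function name meminfo_value is hypothetical, and the real get_meminfo additionally snapshots the file with mapfile and can target a per-NUMA-node meminfo, as the surrounding trace lines show.

```bash
#!/usr/bin/env bash
# Hypothetical, simplified stand-in for the lookup pattern seen in the trace.
meminfo_value() {
    local get=$1 var val _
    # Split each meminfo line on ': ' and skip every key until the requested one;
    # the log shows this as a long run of "continue" steps (MemTotal, MemFree, ...).
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < /proc/meminfo
    echo 0
}

# Example: the three values the test reads before its odd_alloc bookkeeping.
anon=$(meminfo_value AnonHugePages)
surp=$(meminfo_value HugePages_Surp)
resv=$(meminfo_value HugePages_Rsvd)
echo "anon=$anon surp=$surp resv=$resv"
```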
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108691908 kB' 'MemAvailable: 112423432 kB' 'Buffers: 4832 kB' 'Cached: 10636060 kB' 'SwapCached: 0 kB' 'Active: 7583804 kB' 'Inactive: 3701932 kB' 'Active(anon): 7092372 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648048 kB' 'Mapped: 192284 kB' 'Shmem: 6447528 kB' 'KReclaimable: 579104 kB' 'Slab: 1450848 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871744 kB' 'KernelStack: 27696 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8692220 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237628 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.473 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.474 12:48:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.474 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.740 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 
12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108691708 kB' 'MemAvailable: 112423232 kB' 'Buffers: 4832 kB' 'Cached: 10636076 kB' 'SwapCached: 0 kB' 'Active: 7583140 kB' 'Inactive: 3701932 kB' 'Active(anon): 7091708 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647408 kB' 'Mapped: 192176 kB' 'Shmem: 6447544 kB' 'KReclaimable: 579104 kB' 'Slab: 1450972 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871868 kB' 'KernelStack: 27744 kB' 'PageTables: 9192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8692368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237740 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.741 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.742 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:32.743 nr_hugepages=1025 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.743 resv_hugepages=0 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.743 surplus_hugepages=0 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.743 anon_hugepages=0 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages 
)) 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108692092 kB' 'MemAvailable: 112423616 kB' 'Buffers: 4832 kB' 'Cached: 10636076 kB' 'SwapCached: 0 kB' 'Active: 7583460 kB' 'Inactive: 3701932 kB' 'Active(anon): 7092028 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647664 kB' 'Mapped: 192176 kB' 'Shmem: 6447544 kB' 'KReclaimable: 579104 kB' 'Slab: 1450972 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871868 kB' 'KernelStack: 27760 kB' 'PageTables: 9364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8694104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237772 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.743 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 
12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.744 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
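The xtrace here is get_meminfo (setup/common.sh) walking every key of the meminfo snapshot until it reaches the one requested (HugePages_Total for the node-wide check, HugePages_Surp for the per-node checks further down); each non-matching key shows up as a "continue", and the matching key's value is echoed before the function returns. A condensed, hypothetical sketch of that lookup, reconstructed from the trace rather than quoted from setup/common.sh, could look like this:

shopt -s extglob                          # the "Node <n> " strip below uses an extended glob
get_meminfo_sketch() {
    local get=$1 node=${2:-}              # e.g. HugePages_Total, optional node id
    local mem_f=/proc/meminfo mem line var val _
    # Per-node lookups read the node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"             # one array element per meminfo line
    mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line with "Node <n> "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # every mismatch is one "continue" in the trace
        echo "$val"                       # e.g. 1025 for HugePages_Total
        return 0
    done
    return 1
}
# Example: get_meminfo_sketch HugePages_Total
#          get_meminfo_sketch HugePages_Surp 0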
00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60487520 kB' 'MemUsed: 5171488 kB' 'SwapCached: 0 kB' 'Active: 1479072 kB' 'Inactive: 288448 kB' 'Active(anon): 1321324 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1621748 kB' 'Mapped: 38780 kB' 'AnonPages: 148832 kB' 'Shmem: 1175552 kB' 'KernelStack: 14488 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 752276 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 427392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.745 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.746 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 48207176 kB' 'MemUsed: 12472664 kB' 'SwapCached: 0 kB' 'Active: 6106052 kB' 'Inactive: 3413484 kB' 'Active(anon): 5772368 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3413484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9019216 kB' 'Mapped: 153396 kB' 'AnonPages: 500436 kB' 'Shmem: 5272048 kB' 'KernelStack: 13512 kB' 'PageTables: 6060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254220 kB' 'Slab: 698696 kB' 'SReclaimable: 254220 kB' 'SUnreclaim: 444476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.747 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:32.748 node0=512 expecting 513 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:32.748 node1=513 expecting 512 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:32.748 00:03:32.748 real 0m3.924s 00:03:32.748 user 0m1.577s 00:03:32.748 sys 0m2.409s 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.748 12:48:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:32.748 ************************************ 00:03:32.748 END TEST odd_alloc 00:03:32.748 ************************************ 00:03:32.748 12:48:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:32.748 12:48:54 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:32.748 12:48:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.748 12:48:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.748 12:48:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.748 ************************************ 00:03:32.748 START TEST custom_alloc 00:03:32.748 ************************************ 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.748 12:48:54 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.748 12:48:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:36.961 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:36.961 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:36.961 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:36.961 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:36.961 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 
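The custom_alloc trace just above planned 512 hugepages on node 0 and 1024 on node 1 (1536 total) and joined them into HUGENODE before invoking scripts/setup.sh, whose device probing continues below. A hypothetical sketch of that planning, assuming 2048 kB hugepages and kB-based size arguments as in the trace (variable names are illustrative, not the verbatim setup/hugepages.sh):

# Hypothetical sketch; hugepagesize_kb and hugenode_str are illustrative names.
hugepagesize_kb=2048                                   # Hugepagesize reported in /proc/meminfo
declare -a nodes_hp
nodes_hp[0]=$(( 1048576 / hugepagesize_kb ))           # 1 GiB worth of pages for node 0 -> 512
nodes_hp[1]=$(( 2097152 / hugepagesize_kb ))           # 2 GiB worth of pages for node 1 -> 1024

declare -a HUGENODE=()
nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")    # per-node assignment strings
    (( nr_hugepages += nodes_hp[node] ))               # running total across nodes
done
hugenode_str=$(IFS=,; echo "${HUGENODE[*]}")           # comma-joined, as in the trace
echo "$hugenode_str"    # nodes_hp[0]=512,nodes_hp[1]=1024
echo "$nr_hugepages"    # 1536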
00:03:36.961 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:36.961 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:36.961 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:36.961 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:36.961 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:36.961 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:36.961 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:36.961 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:36.962 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:36.962 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:36.962 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:36.962 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107607364 kB' 'MemAvailable: 111338888 kB' 'Buffers: 4832 kB' 'Cached: 10636244 kB' 'SwapCached: 0 kB' 'Active: 7585480 kB' 'Inactive: 3701932 kB' 'Active(anon): 7094048 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 648976 kB' 'Mapped: 192168 kB' 'Shmem: 6447712 kB' 'KReclaimable: 579104 kB' 'Slab: 1451184 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 872080 kB' 'KernelStack: 27680 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8695376 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237820 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.962 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
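The get_meminfo calls traced above and below (AnonHugePages returned 0 here; HugePages_Surp and HugePages_Rsvd are read the same way) all follow one pattern: read /proc/meminfo, or the per-node /sys/devices/system/node/node<N>/meminfo file when a node is given, and print the value of a single field. A hedged, simplified sketch of that pattern follows; it is not the literal setup/common.sh source, which buffers the file with mapfile and strips the "Node <n>" prefix as seen at common.sh@28-29:

  get_meminfo() {
          # $1 = field name, e.g. AnonHugePages, HugePages_Surp, HugePages_Total.
          local get=$1 mem_f=/proc/meminfo
          local var val _
          # IFS=': ' matches the trace above: each meminfo line splits into the
          # field name, its numeric value, and an optional trailing "kB".
          while IFS=': ' read -r var val _; do
                  if [[ $var == "$get" ]]; then
                          echo "$val"
                          return 0
                  fi
          done <"$mem_f"
          return 1   # field not present
  }
  # On the box traced here: get_meminfo AnonHugePages -> 0,
  # get_meminfo HugePages_Surp -> 0, get_meminfo HugePages_Total -> 1536.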
00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107607528 kB' 'MemAvailable: 111339052 kB' 'Buffers: 4832 kB' 'Cached: 10636248 kB' 'SwapCached: 0 kB' 'Active: 7584732 kB' 'Inactive: 3701932 kB' 'Active(anon): 7093300 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648304 kB' 'Mapped: 192236 kB' 'Shmem: 6447716 kB' 'KReclaimable: 579104 kB' 'Slab: 1451300 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 872196 kB' 'KernelStack: 27712 kB' 'PageTables: 9308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8695396 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237836 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.963 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.964 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107609036 kB' 'MemAvailable: 111340560 kB' 'Buffers: 4832 kB' 'Cached: 10636260 kB' 'SwapCached: 0 kB' 'Active: 7583792 kB' 'Inactive: 3701932 kB' 'Active(anon): 7092360 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647852 kB' 'Mapped: 192160 kB' 'Shmem: 6447728 kB' 'KReclaimable: 579104 kB' 'Slab: 1451284 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 872180 kB' 'KernelStack: 27648 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8692344 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237724 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.965 12:48:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@32 tests each remaining /proc/meminfo key (SwapTotal through HugePages_Total) against HugePages_Rsvd and skips it with "continue"; the matching HugePages_Rsvd entry and its value follow below]
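For anyone reading the trace rather than re-running it: the loop stepped through above and below is setup/common.sh's get_meminfo, which splits every 'Key: value' line of the selected meminfo file on IFS=': ' and stops when the requested key matches, echoing the bare number. A minimal stand-alone sketch of that pattern (an editor's illustration against /proc/meminfo, not the SPDK helper itself; get_field is a made-up name):

#!/usr/bin/env bash
# Editor's sketch: fetch one numeric field from /proc/meminfo the way the
# traced loop does - split each line on ': ' and stop at the requested key.
get_field() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_field HugePages_Rsvd    # prints 0 on the host captured in this log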
00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:36.967 nr_hugepages=1536 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.967 resv_hugepages=0 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.967 surplus_hugepages=0 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.967 anon_hugepages=0 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107608560 kB' 'MemAvailable: 111340084 kB' 'Buffers: 4832 kB' 'Cached: 10636288 kB' 'SwapCached: 0 kB' 'Active: 7583920 kB' 'Inactive: 3701932 kB' 'Active(anon): 7092488 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 647964 kB' 'Mapped: 192156 kB' 'Shmem: 6447756 kB' 'KReclaimable: 579104 kB' 'Slab: 1451284 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 872180 kB' 'KernelStack: 27664 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8692364 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237724 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.967 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.967 12:48:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
[xtrace elided: the same per-key scan repeats for get_meminfo HugePages_Total, skipping Inactive through AnonHugePages with "continue"; the trace resumes below and ends with the HugePages_Total match]
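The arithmetic asserted at hugepages.sh@107 above and again at @110 just below is only a consistency check: the kernel's HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages, and at the 2048 kB page size reported here the 1536 pages account for 1536 * 2048 = 3145728 kB, exactly the 'Hugetlb: 3145728 kB' line in the dump. A hedged restatement of that check (editor's sketch, not part of the test scripts):

#!/usr/bin/env bash
# Editor's sketch: re-derive the global hugepage accounting shown in the trace.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

# 1536 stands in for the nr_hugepages value requested by the test run.
(( total == 1536 + surp + resv )) && echo "pool is consistent: $total pages"
echo "hugetlb pool size: $(( total * size_kb )) kB"   # 1536 * 2048 = 3145728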
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60454204 kB' 'MemUsed: 5204804 kB' 'SwapCached: 0 kB' 'Active: 1479032 kB' 'Inactive: 288448 kB' 'Active(anon): 1321284 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1621896 kB' 'Mapped: 38796 kB' 'AnonPages: 148760 kB' 'Shmem: 1175700 kB' 'KernelStack: 14248 kB' 'PageTables: 3280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 752456 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 427572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.969 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.969 
12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: per-key scan of /sys/devices/system/node/node0/meminfo for get_meminfo HugePages_Surp 0, skipping MemUsed through HugePages_Free with "continue"; the HugePages_Surp match follows below]
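What the per-node passes establish: each /sys/devices/system/node/nodeN/meminfo line carries a 'Node N ' prefix, which common.sh@29 strips with an extglob pattern before running the same key scan, and the custom allocation under test splits the 1536-page pool as 512 pages on node 0 and 1024 pages on node 1 with no surplus on either. A stand-alone sketch of that per-node summation (editor's illustration; variable names are made up):

#!/usr/bin/env bash
# Editor's sketch: sum HugePages_Total across NUMA nodes, stripping the
# "Node N " prefix from each line the way setup/common.sh@29 does.
shopt -s extglob
sum=0
for f in /sys/devices/system/node/node+([0-9])/meminfo; do
    mapfile -t mem < "$f"
    mem=("${mem[@]#Node +([0-9]) }")          # drop the per-node prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Total ]] && (( sum += val ))
    done
done
echo "per-node HugePages_Total sum: $sum"     # 512 + 1024 = 1536 here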
00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.970 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47154860 kB' 'MemUsed: 13524980 kB' 'SwapCached: 0 kB' 'Active: 6105408 kB' 'Inactive: 3413484 kB' 'Active(anon): 5771724 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3413484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9019244 kB' 'Mapped: 153360 kB' 'AnonPages: 499736 kB' 'Shmem: 5272076 kB' 'KernelStack: 13448 kB' 'PageTables: 5736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254220 kB' 'Slab: 698828 kB' 'SReclaimable: 254220 kB' 'SUnreclaim: 444608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.971 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 
12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: per-key scan of /sys/devices/system/node/node1/meminfo for get_meminfo HugePages_Surp 1, skipping MemUsed through SUnreclaim with "continue"; the trace resumes below]
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.971 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.971 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.971 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.971 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.972 12:48:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:36.972 node0=512 expecting 512 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:36.972 node1=1024 expecting 1024 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:36.972 00:03:36.972 real 0m3.983s 00:03:36.972 user 0m1.594s 00:03:36.972 sys 0m2.455s 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:36.972 12:48:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:36.972 ************************************ 00:03:36.972 END TEST custom_alloc 00:03:36.972 ************************************ 00:03:36.972 12:48:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:36.972 12:48:58 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:36.972 12:48:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:36.972 12:48:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.972 12:48:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:36.972 ************************************ 00:03:36.972 START TEST no_shrink_alloc 00:03:36.972 ************************************ 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:36.972 12:48:58 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.972 12:48:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.185 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:41.185 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108671544 kB' 'MemAvailable: 112403068 kB' 'Buffers: 4832 kB' 'Cached: 10636420 kB' 'SwapCached: 0 kB' 'Active: 7586272 kB' 'Inactive: 3701932 kB' 'Active(anon): 7094840 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649856 kB' 'Mapped: 192568 kB' 'Shmem: 6447888 kB' 'KReclaimable: 579104 kB' 'Slab: 1451088 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871984 kB' 'KernelStack: 27696 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8693580 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237676 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.185 
12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.185 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
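[Editor's note] The surrounding trace is setup/common.sh's get_meminfo walking every "key: value" pair of the /proc/meminfo snapshot printed above until it reaches the field it was asked for (AnonHugePages here), then echoing that value and returning. A minimal standalone sketch of that scan, reconstructed from the IFS=': ' / read -r var val _ pattern visible in the trace; the function name and the direct read from /proc/meminfo are illustrative assumptions, not the SPDK helper itself:

  #!/usr/bin/env bash
  # Sketch only: walk "Key:   value kB" lines and stop at the first matching key.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"        # e.g. 0 for AnonHugePages on this host
              return 0
          fi
      done </proc/meminfo
      return 1
  }
  get_meminfo_sketch AnonHugePages
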
00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108672276 kB' 'MemAvailable: 112403800 kB' 'Buffers: 4832 kB' 'Cached: 10636424 kB' 'SwapCached: 0 kB' 'Active: 7585204 kB' 'Inactive: 3701932 kB' 'Active(anon): 7093772 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649216 kB' 'Mapped: 192184 kB' 'Shmem: 6447892 kB' 'KReclaimable: 579104 kB' 'Slab: 1451080 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871976 kB' 'KernelStack: 27680 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8693596 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237644 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.186 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
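[Editor's note] The [[ -e /sys/devices/system/node/node/meminfo ]] check and the "${mem[@]#Node +([0-9]) }" prefix strip seen in this trace suggest the same scan can be pointed at a single NUMA node's meminfo, whose lines carry a leading "Node <N> ". A hedged per-node sketch under that assumption; the helper name and argument order are illustrative:

  #!/usr/bin/env bash
  # Sketch only: per-node variant of the same key/value scan.
  get_node_meminfo_sketch() {
      local get=$1 node=$2 line var val _
      local mem_f=/sys/devices/system/node/node${node}/meminfo
      [[ -e $mem_f ]] || return 1
      while read -r line; do
          line=${line#"Node $node "}     # per-node lines start with "Node <N> "
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done <"$mem_f"
      return 1
  }
  get_node_meminfo_sketch HugePages_Surp 0    # this node's surplus hugepage count
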
00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:41.187 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108672968 kB' 'MemAvailable: 112404492 kB' 'Buffers: 4832 kB' 'Cached: 10636440 kB' 'SwapCached: 0 kB' 'Active: 7585164 kB' 'Inactive: 3701932 kB' 'Active(anon): 7093732 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649128 kB' 'Mapped: 192184 kB' 'Shmem: 6447908 kB' 
'KReclaimable: 579104 kB' 'Slab: 1451080 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871976 kB' 'KernelStack: 27664 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8693620 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237644 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
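For context on where these repeated lookups lead: further down in this trace the test reads HugePages_Rsvd and HugePages_Total the same way, checks that the pool the kernel reports accounts for the requested pages plus any surplus and reserved ones ((( 1024 == nr_hugepages + surp + resv ))), and then walks /sys/devices/system/node/node*/meminfo to confirm the per-NUMA-node split, which is where the later "node0=1024 expecting 1024" line comes from. The sketch below shows that accounting in isolation; verify_hugepages_sketch is an invented name, it reuses get_meminfo_sketch from the previous sketch, and it is an illustration of the arithmetic rather than the logic in setup/hugepages.sh.

    # Invented illustration of the hugepage accounting this test performs,
    # reusing get_meminfo_sketch from the earlier sketch; names and structure
    # are assumptions, only the meminfo fields and sysfs paths are real.
    verify_hugepages_sketch() {
        local requested=$1        # pages the test configured, 1024 in this run
        local total surp resv node
        total=$(get_meminfo_sketch HugePages_Total)
        surp=$(get_meminfo_sketch HugePages_Surp)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        # System-wide check, mirroring (( 1024 == nr_hugepages + surp + resv ))
        # in the trace: the reported pool must cover the requested pages plus
        # any surplus and reserved ones.
        (( total == requested + surp + resv )) || return 1
        # Per-NUMA-node breakdown, the source of the "node0=1024 expecting 1024"
        # output later in this log.
        for node in /sys/devices/system/node/node[0-9]*; do
            node=${node##*node}
            echo "node$node=$(get_meminfo_sketch HugePages_Total "$node")"
        done
    }

Under that assumption, verify_hugepages_sketch 1024 would succeed on this machine and print one nodeN=count line per NUMA node (two nodes here, with all 1024 pages on node 0).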
00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.188 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:41.189 nr_hugepages=1024 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:41.189 resv_hugepages=0 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:41.189 surplus_hugepages=0 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:41.189 anon_hugepages=0 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108672716 kB' 'MemAvailable: 112404240 kB' 'Buffers: 4832 kB' 'Cached: 10636460 kB' 'SwapCached: 0 kB' 'Active: 7585260 kB' 'Inactive: 3701932 kB' 'Active(anon): 7093828 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649224 kB' 'Mapped: 192184 kB' 'Shmem: 6447928 kB' 'KReclaimable: 579104 kB' 'Slab: 1451080 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871976 kB' 'KernelStack: 27680 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8693644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237644 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.189 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59441696 kB' 'MemUsed: 6217312 kB' 'SwapCached: 0 kB' 'Active: 1477372 kB' 'Inactive: 288448 kB' 'Active(anon): 1319624 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1622060 kB' 'Mapped: 38812 kB' 'AnonPages: 146936 kB' 'Shmem: 1175864 kB' 'KernelStack: 14216 kB' 'PageTables: 3176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 752476 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 427592 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 
12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.190 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p
]] 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:41.191 node0=1024 expecting 1024 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.191 12:49:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:44.488 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:44.488 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:44.488 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:44.488 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:44.488 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:44.489 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:44.489 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:44.489 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:44.489 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:44.489 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:44.489 0000:00:01.7 (8086 0b00): Already using the 
vfio-pci driver 00:03:44.489 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:44.489 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:44.489 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:44.489 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:44.489 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:44.489 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:44.489 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108673944 kB' 'MemAvailable: 112405468 kB' 'Buffers: 4832 kB' 'Cached: 10636592 kB' 'SwapCached: 0 kB' 'Active: 7587744 kB' 'Inactive: 3701932 kB' 'Active(anon): 7096312 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651148 kB' 'Mapped: 192380 kB' 'Shmem: 6448060 kB' 'KReclaimable: 579104 kB' 'Slab: 1451008 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871904 kB' 'KernelStack: 27696 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8696020 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 
237676 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.489 12:49:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.489 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 12:49:06
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 
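The xtrace above is setup/common.sh's get_meminfo() walking /proc/meminfo one field at a time under set -x: it points mem_f at /proc/meminfo (or the per-node file under /sys/devices/system/node when a node is given), loads the file with mapfile, strips the "Node <N>" prefix, and scans for the requested key (AnonHugePages here, HugePages_Surp next) before echoing its value. A rough standalone sketch of that lookup, where the function name and control flow are assumptions for illustration rather than the SPDK helper itself:

    # Minimal sketch of a get_meminfo-style lookup (illustrative only; the real
    # helper in setup/common.sh caches the whole file via mapfile first).
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live under /sys/devices/system/node/nodeN/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            # Node files prefix every row with "Node <N> "; drop it so keys match.
            [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done <"$mem_f"
        echo 0
    }

    get_meminfo_sketch HugePages_Total 0   # would print the 1024 pages verified on node0 above

The per-field scan is exactly what produces the long continue/IFS/read runs in the trace; the value echoed at the match (0 for AnonHugePages here) is what hugepages.sh captures into anon, surp and friends.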
00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108676980 kB' 'MemAvailable: 112408504 kB' 'Buffers: 4832 kB' 'Cached: 10636592 kB' 'SwapCached: 0 kB' 'Active: 7587692 kB' 'Inactive: 3701932 kB' 'Active(anon): 7096260 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 651032 kB' 'Mapped: 192276 kB' 'Shmem: 6448060 kB' 'KReclaimable: 579104 kB' 'Slab: 1451008 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871904 kB' 'KernelStack: 27664 kB' 'PageTables: 8968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8697520 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237708 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 12:49:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.761 12:49:06
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.761 
12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.761 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108678372 kB' 'MemAvailable: 112409896 kB' 'Buffers: 4832 kB' 'Cached: 10636612 kB' 'SwapCached: 0 kB' 'Active: 7586592 kB' 'Inactive: 3701932 kB' 'Active(anon): 7095160 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 650312 kB' 'Mapped: 192192 kB' 'Shmem: 6448080 kB' 'KReclaimable: 579104 kB' 'Slab: 1450992 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871888 kB' 'KernelStack: 27760 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8696432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237740 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.762 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.763 nr_hugepages=1024 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.763 resv_hugepages=0 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.763 surplus_hugepages=0 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.763 anon_hugepages=0 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local 
node= 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.763 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108678824 kB' 'MemAvailable: 112410348 kB' 'Buffers: 4832 kB' 'Cached: 10636632 kB' 'SwapCached: 0 kB' 'Active: 7586760 kB' 'Inactive: 3701932 kB' 'Active(anon): 7095328 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701932 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 650468 kB' 'Mapped: 192192 kB' 'Shmem: 6448100 kB' 'KReclaimable: 579104 kB' 'Slab: 1450992 kB' 'SReclaimable: 579104 kB' 'SUnreclaim: 871888 kB' 'KernelStack: 27680 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8697812 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237756 kB' 'VmallocChunk: 0 kB' 'Percpu: 143424 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4093300 kB' 'DirectMap2M: 57452544 kB' 'DirectMap1G: 74448896 kB' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.764 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.765 12:49:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59450548 kB' 'MemUsed: 6208460 kB' 'SwapCached: 0 kB' 'Active: 1480576 kB' 'Inactive: 288448 kB' 'Active(anon): 1322828 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1622188 kB' 'Mapped: 38828 kB' 'AnonPages: 150020 kB' 'Shmem: 1175992 kB' 'KernelStack: 14248 kB' 'PageTables: 3288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 752300 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 427416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.765 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 
12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.766 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
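The wall of comparisons above is SPDK's get_meminfo helper (test/setup/common.sh) walking a meminfo file one "Key: value" pair at a time with IFS=': ' until it reaches the requested field (HugePages_Rsvd, then HugePages_Total, then HugePages_Surp for node 0), echoing the value and returning. A minimal Bash sketch of that lookup pattern, assuming a standard Linux /proc and /sys layout and using the file names the trace shows (an illustration of the pattern, not the script itself):

get_meminfo_sketch() {
    local get=$1 node=${2:-}              # field name, optional NUMA node
    local mem_f=/proc/meminfo
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node meminfo lines carry a "Node N " prefix ("Node 0 MemTotal: ...");
    # strip it so both files parse identically.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    echo 0                                # field not present
}

Called as get_meminfo_sketch HugePages_Total or get_meminfo_sketch HugePages_Surp 0, this reproduces the 1024 / 0 / 0 answers that the accounting check (( 1024 == nr_hugepages + surp + resv )) in hugepages.sh consumes above.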
00:03:44.767 node0=1024 expecting 1024
00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:44.767
00:03:44.767 real 0m7.831s
00:03:44.767 user 0m3.034s
00:03:44.767 sys 0m4.919s
00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:44.767 12:49:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:44.767 ************************************
00:03:44.767 END TEST no_shrink_alloc
00:03:44.767 ************************************
00:03:44.767 12:49:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:44.767 12:49:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:44.767
00:03:44.767 real 0m28.312s
00:03:44.767 user 0m11.056s
00:03:44.767 sys 0m17.625s
00:03:44.767 12:49:06 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:44.767 12:49:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:44.767 ************************************
00:03:44.767 END TEST hugepages
00:03:44.767 ************************************
00:03:44.767 12:49:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:44.767 12:49:06 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:44.767 12:49:06 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:44.767 12:49:06 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:44.767 12:49:06 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:44.767 ************************************
00:03:44.767 START TEST driver
00:03:44.767 ************************************
00:03:44.767 12:49:06 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:45.029 * Looking for test storage...
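The driver test whose output continues below traces guess_driver picking a PCI driver: vfio-pci is preferred when unsafe no-IOMMU mode is enabled or at least one IOMMU group exists (370 are found on this node), and only if modprobe can resolve vfio_pci to actual .ko modules; otherwise the script falls through to the "No valid driver found" sentinel the test compares against. A condensed, approximate sketch under those assumptions; pick_pci_driver is an illustrative name and this is not the verbatim setup/driver.sh code:

# Condensed sketch of the choice traced below; an approximation, not the verbatim script.
pick_pci_driver() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    # Count IOMMU groups; the glob stays literal when none exist, so guard against that.
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    [[ -e ${iommu_groups[0]} ]] || iommu_groups=()

    if [[ $unsafe_vfio == [Yy] ]] || (( ${#iommu_groups[@]} > 0 )); then
        # Treat vfio_pci as usable only if modprobe resolves it to real .ko modules.
        if [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
    return 1
}

With vfio-pci selected, the test then re-runs setup.sh config and checks that each reported device line names the same driver, which is what the repeated "[[ vfio-pci == vfio-pci ]]" comparisons further down in the trace are doing.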
00:03:45.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:45.029 12:49:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:45.029 12:49:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.029 12:49:06 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.318 12:49:11 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:50.318 12:49:11 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.318 12:49:11 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.318 12:49:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:50.318 ************************************ 00:03:50.318 START TEST guess_driver 00:03:50.318 ************************************ 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:50.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:50.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:50.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:50.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:50.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:50.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:50.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:50.318 12:49:11 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:50.318 Looking for driver=vfio-pci 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.318 12:49:11 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.615 12:49:15 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.615 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.616 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.616 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.616 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.616 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.616 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.616 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.616 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.616 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.616 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.616 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.616 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.880 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.880 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.880 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.880 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:53.880 12:49:15 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:53.880 12:49:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.880 12:49:15 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.183 00:03:59.183 real 0m9.050s 00:03:59.183 user 0m3.043s 00:03:59.183 sys 0m5.259s 00:03:59.183 12:49:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.183 12:49:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:59.183 ************************************ 00:03:59.183 END TEST guess_driver 00:03:59.183 ************************************ 00:03:59.183 12:49:20 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:59.183 00:03:59.183 real 0m14.057s 00:03:59.183 user 0m4.432s 00:03:59.183 sys 0m8.076s 00:03:59.183 12:49:20 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.183 12:49:20 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:59.183 ************************************ 00:03:59.183 END TEST driver 00:03:59.183 ************************************ 00:03:59.183 12:49:20 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:59.183 12:49:20 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:59.183 12:49:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.183 12:49:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.183 12:49:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.183 ************************************ 00:03:59.183 START TEST devices 00:03:59.183 ************************************ 00:03:59.183 12:49:20 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:59.183 * Looking for test storage... 00:03:59.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:59.183 12:49:20 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:59.183 12:49:20 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:59.183 12:49:20 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.183 12:49:20 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:03.387 12:49:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:03.387 12:49:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:03.387 12:49:24 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:03.387 12:49:24 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:03.387 12:49:24 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:03.387 12:49:24 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:03.387 12:49:24 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:03.387 12:49:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:03.387 12:49:24 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:03.387 
12:49:24 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:03.387 No valid GPT data, bailing 00:04:03.387 12:49:24 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:03.387 12:49:24 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:03.387 12:49:24 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:03.387 12:49:24 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:03.387 12:49:24 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:03.387 12:49:24 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:03.387 12:49:24 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:03.387 12:49:24 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.387 12:49:24 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.387 12:49:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:03.387 ************************************ 00:04:03.387 START TEST nvme_mount 00:04:03.387 ************************************ 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:03.387 12:49:25 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:04.329 Creating new GPT entries in memory. 00:04:04.329 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:04.329 other utilities. 00:04:04.329 12:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:04.329 12:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.329 12:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:04.329 12:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:04.329 12:49:26 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:05.270 Creating new GPT entries in memory. 00:04:05.270 The operation has completed successfully. 00:04:05.270 12:49:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:05.270 12:49:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.270 12:49:27 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 435008 00:04:05.270 12:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.271 12:49:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:05.271 12:49:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.271 12:49:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:05.271 12:49:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.531 12:49:27 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.531 12:49:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:09.738 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.738 12:49:30 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:09.738 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:09.738 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:09.738 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:09.738 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:09.738 12:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:09.738 12:49:31 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:09.738 12:49:31 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.738 12:49:31 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:09.738 12:49:31 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.739 12:49:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.039 12:49:34 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.039 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.300 12:49:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.300 12:49:35 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.300 12:49:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:17.503 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:17.504 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:17.504 00:04:17.504 real 0m13.937s 00:04:17.504 user 0m4.344s 00:04:17.504 sys 0m7.514s 00:04:17.504 12:49:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.504 12:49:38 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:17.504 ************************************ 00:04:17.504 END TEST nvme_mount 00:04:17.504 ************************************ 00:04:17.504 12:49:38 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:17.504 12:49:38 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:17.504 12:49:38 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.504 12:49:38 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.504 12:49:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:17.504 ************************************ 00:04:17.504 START TEST dm_mount 00:04:17.504 ************************************ 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:17.504 12:49:39 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:18.447 Creating new GPT entries in memory. 00:04:18.447 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:18.447 other utilities. 00:04:18.447 12:49:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:18.447 12:49:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.447 12:49:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:18.447 12:49:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.447 12:49:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:19.423 Creating new GPT entries in memory. 00:04:19.423 The operation has completed successfully. 00:04:19.423 12:49:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:19.423 12:49:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.423 12:49:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:19.423 12:49:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:19.423 12:49:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:20.366 The operation has completed successfully. 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 440587 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:20.366 12:49:42 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:20.626 12:49:42 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.626 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:20.627 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:20.627 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:20.627 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:20.627 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:20.627 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.627 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:20.627 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:20.627 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.627 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.627 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:20.627 12:49:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.627 12:49:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.627 12:49:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.843 12:49:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:24.843 12:49:46 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.843 12:49:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.147 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:28.148 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.148 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:28.148 12:49:49 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:28.148 00:04:28.148 real 0m10.879s 00:04:28.148 user 0m2.938s 00:04:28.148 sys 0m5.017s 00:04:28.148 12:49:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.148 12:49:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:28.148 ************************************ 00:04:28.148 END TEST dm_mount 00:04:28.148 ************************************ 00:04:28.148 12:49:49 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:04:28.148 12:49:49 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:28.148 12:49:49 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:28.148 12:49:49 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.148 12:49:49 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.148 12:49:49 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:28.148 12:49:49 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.148 12:49:49 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:28.408 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:28.408 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:28.408 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:28.408 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:28.408 12:49:50 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:28.408 12:49:50 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:28.670 12:49:50 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:28.670 12:49:50 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.670 12:49:50 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:28.670 12:49:50 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.670 12:49:50 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:28.670 00:04:28.670 real 0m29.575s 00:04:28.670 user 0m8.959s 00:04:28.670 sys 0m15.487s 00:04:28.670 12:49:50 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.670 12:49:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:28.670 ************************************ 00:04:28.670 END TEST devices 00:04:28.670 ************************************ 00:04:28.670 12:49:50 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:28.670 00:04:28.670 real 1m39.328s 00:04:28.670 user 0m33.439s 00:04:28.670 sys 0m57.523s 00:04:28.670 12:49:50 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.670 12:49:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:28.670 ************************************ 00:04:28.670 END TEST setup.sh 00:04:28.670 ************************************ 00:04:28.670 12:49:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:28.670 12:49:50 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:32.877 Hugepages 00:04:32.877 node hugesize free / total 00:04:32.877 node0 1048576kB 0 / 0 00:04:32.877 node0 2048kB 2048 / 2048 00:04:32.877 node1 1048576kB 0 / 0 00:04:32.877 node1 2048kB 0 / 0 00:04:32.877 00:04:32.877 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:32.877 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:32.877 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:32.877 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:32.877 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:32.877 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:32.877 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:32.877 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:32.877 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:32.877 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:32.877 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:32.877 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:32.877 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:32.877 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:32.877 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:32.877 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:32.877 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:32.877 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:32.877 12:49:54 -- spdk/autotest.sh@130 -- # uname -s 00:04:32.877 12:49:54 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:32.877 12:49:54 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:32.877 12:49:54 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:36.173 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:36.173 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:36.173 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:36.173 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:36.173 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:36.173 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:36.173 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:36.173 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:36.434 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:36.434 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:36.434 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:36.434 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:36.434 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:36.434 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:36.434 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:36.434 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:38.348 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:38.348 12:49:59 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:39.293 12:50:00 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:39.293 12:50:00 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:39.293 12:50:00 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:39.293 12:50:00 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:39.293 12:50:00 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:39.293 12:50:00 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:39.293 12:50:00 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:39.293 12:50:00 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:39.293 12:50:00 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:39.293 12:50:01 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:39.293 12:50:01 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:39.293 12:50:01 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:43.608 Waiting for block devices as requested 00:04:43.608 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:43.608 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:43.608 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:43.608 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:43.608 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:43.608 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:43.608 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:43.608 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:43.869 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:04:43.869 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:43.869 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:44.130 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:44.130 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:44.130 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:44.130 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:44.392 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:44.392 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:44.392 12:50:06 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:44.392 12:50:06 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:44.392 12:50:06 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:44.392 12:50:06 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:04:44.392 12:50:06 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:44.392 12:50:06 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:44.392 12:50:06 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:44.392 12:50:06 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:44.392 12:50:06 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:44.392 12:50:06 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:44.392 12:50:06 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:44.392 12:50:06 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:44.392 12:50:06 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:44.392 12:50:06 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:04:44.392 12:50:06 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:44.392 12:50:06 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:44.392 12:50:06 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:44.392 12:50:06 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:44.392 12:50:06 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:44.392 12:50:06 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:44.392 12:50:06 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:44.392 12:50:06 -- common/autotest_common.sh@1557 -- # continue 00:04:44.392 12:50:06 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:44.392 12:50:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:44.392 12:50:06 -- common/autotest_common.sh@10 -- # set +x 00:04:44.392 12:50:06 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:44.392 12:50:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.392 12:50:06 -- common/autotest_common.sh@10 -- # set +x 00:04:44.392 12:50:06 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:48.621 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
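The nvme id-ctrl check above pulls the OACS field and derives oacs_ns_manage=8 from 0x5f; bit 3 of OACS is the Namespace Management capability, so the bit test below reproduces that result. A minimal sketch, assuming nvme-cli is installed, the controller is /dev/nvme0 as in this run, and the command is run as root:

  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)   # e.g. ' 0x5f', as in the log
  oacs_ns_manage=$((oacs & 0x8))                              # bit 3 = Namespace Management
  if (( oacs_ns_manage != 0 )); then
    echo "controller supports namespace management"
  fi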
00:04:48.621 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:48.621 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:48.621 12:50:09 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:48.621 12:50:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:48.621 12:50:09 -- common/autotest_common.sh@10 -- # set +x 00:04:48.621 12:50:09 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:48.621 12:50:09 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:48.621 12:50:09 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:48.621 12:50:09 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:48.621 12:50:09 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:48.621 12:50:09 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:48.621 12:50:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:48.621 12:50:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:48.621 12:50:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:48.621 12:50:09 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:48.621 12:50:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:48.621 12:50:09 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:48.621 12:50:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:04:48.621 12:50:09 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:48.621 12:50:09 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:48.621 12:50:09 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:04:48.621 12:50:09 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:48.621 12:50:09 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:48.621 12:50:09 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:48.621 12:50:09 -- common/autotest_common.sh@1593 -- # return 0 00:04:48.621 12:50:09 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:48.621 12:50:09 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:48.621 12:50:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:48.621 12:50:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:48.621 12:50:09 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:48.621 12:50:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:48.621 12:50:09 -- common/autotest_common.sh@10 -- # set +x 00:04:48.621 12:50:09 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:48.621 12:50:09 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:48.621 12:50:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.621 12:50:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.621 12:50:09 -- common/autotest_common.sh@10 -- # set +x 00:04:48.621 ************************************ 00:04:48.621 START TEST env 00:04:48.621 ************************************ 00:04:48.621 12:50:09 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:48.621 * Looking for test storage... 
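The opal_revert_cleanup step above only keeps controllers whose PCI device ID is 0x0a54; the attached controller reports 0xa80a, so nothing matches and the cleanup is skipped. A rough shell equivalent of that sysfs filter, with the BDF and device ID taken from the log (the loop over a single hard-coded BDF stands in for the gen_nvme.sh output):

  want=0x0a54
  for bdf in 0000:65:00.0; do
    dev=$(cat "/sys/bus/pci/devices/$bdf/device")   # prints 0xa80a here
    [[ $dev == "$want" ]] && echo "$bdf"
  done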
00:04:48.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:48.621 12:50:10 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:48.621 12:50:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.621 12:50:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.621 12:50:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.621 ************************************ 00:04:48.621 START TEST env_memory 00:04:48.621 ************************************ 00:04:48.621 12:50:10 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:48.621 00:04:48.621 00:04:48.621 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.621 http://cunit.sourceforge.net/ 00:04:48.621 00:04:48.621 00:04:48.621 Suite: memory 00:04:48.621 Test: alloc and free memory map ...[2024-07-15 12:50:10.197216] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:48.621 passed 00:04:48.621 Test: mem map translation ...[2024-07-15 12:50:10.222922] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:48.621 [2024-07-15 12:50:10.222962] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:48.621 [2024-07-15 12:50:10.223012] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:48.621 [2024-07-15 12:50:10.223021] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:48.621 passed 00:04:48.621 Test: mem map registration ...[2024-07-15 12:50:10.278433] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:48.621 [2024-07-15 12:50:10.278459] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:48.621 passed 00:04:48.621 Test: mem map adjacent registrations ...passed 00:04:48.621 00:04:48.621 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.621 suites 1 1 n/a 0 0 00:04:48.621 tests 4 4 4 0 0 00:04:48.621 asserts 152 152 152 0 n/a 00:04:48.621 00:04:48.621 Elapsed time = 0.195 seconds 00:04:48.621 00:04:48.621 real 0m0.210s 00:04:48.621 user 0m0.195s 00:04:48.621 sys 0m0.014s 00:04:48.621 12:50:10 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.621 12:50:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:48.621 ************************************ 00:04:48.621 END TEST env_memory 00:04:48.621 ************************************ 00:04:48.621 12:50:10 env -- common/autotest_common.sh@1142 -- # return 0 00:04:48.621 12:50:10 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:48.621 12:50:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
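The memory_ut errors above are the intended negative cases: the spdk_mem_map translation and registration calls appear to work on 2 MB granularity, so a vaddr of 0x4d2 or a length of 1234 is rejected while 0x200000-multiples pass. The alignment check, replayed in shell on the values visible in the output:

  hugepage=$((2 * 1024 * 1024))                 # 2 MB granularity
  for val in 2097152 1234 0x4d2 0x200000; do
    if (( val % hugepage == 0 )); then
      echo "$val is 2MB-aligned"
    else
      echo "$val is rejected"
    fi
  done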
00:04:48.621 12:50:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.621 12:50:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.621 ************************************ 00:04:48.621 START TEST env_vtophys 00:04:48.621 ************************************ 00:04:48.621 12:50:10 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:48.884 EAL: lib.eal log level changed from notice to debug 00:04:48.884 EAL: Detected lcore 0 as core 0 on socket 0 00:04:48.884 EAL: Detected lcore 1 as core 1 on socket 0 00:04:48.884 EAL: Detected lcore 2 as core 2 on socket 0 00:04:48.884 EAL: Detected lcore 3 as core 3 on socket 0 00:04:48.884 EAL: Detected lcore 4 as core 4 on socket 0 00:04:48.884 EAL: Detected lcore 5 as core 5 on socket 0 00:04:48.884 EAL: Detected lcore 6 as core 6 on socket 0 00:04:48.884 EAL: Detected lcore 7 as core 7 on socket 0 00:04:48.884 EAL: Detected lcore 8 as core 8 on socket 0 00:04:48.884 EAL: Detected lcore 9 as core 9 on socket 0 00:04:48.884 EAL: Detected lcore 10 as core 10 on socket 0 00:04:48.884 EAL: Detected lcore 11 as core 11 on socket 0 00:04:48.884 EAL: Detected lcore 12 as core 12 on socket 0 00:04:48.884 EAL: Detected lcore 13 as core 13 on socket 0 00:04:48.884 EAL: Detected lcore 14 as core 14 on socket 0 00:04:48.884 EAL: Detected lcore 15 as core 15 on socket 0 00:04:48.884 EAL: Detected lcore 16 as core 16 on socket 0 00:04:48.884 EAL: Detected lcore 17 as core 17 on socket 0 00:04:48.884 EAL: Detected lcore 18 as core 18 on socket 0 00:04:48.884 EAL: Detected lcore 19 as core 19 on socket 0 00:04:48.884 EAL: Detected lcore 20 as core 20 on socket 0 00:04:48.884 EAL: Detected lcore 21 as core 21 on socket 0 00:04:48.884 EAL: Detected lcore 22 as core 22 on socket 0 00:04:48.884 EAL: Detected lcore 23 as core 23 on socket 0 00:04:48.884 EAL: Detected lcore 24 as core 24 on socket 0 00:04:48.884 EAL: Detected lcore 25 as core 25 on socket 0 00:04:48.884 EAL: Detected lcore 26 as core 26 on socket 0 00:04:48.884 EAL: Detected lcore 27 as core 27 on socket 0 00:04:48.884 EAL: Detected lcore 28 as core 28 on socket 0 00:04:48.884 EAL: Detected lcore 29 as core 29 on socket 0 00:04:48.884 EAL: Detected lcore 30 as core 30 on socket 0 00:04:48.884 EAL: Detected lcore 31 as core 31 on socket 0 00:04:48.884 EAL: Detected lcore 32 as core 32 on socket 0 00:04:48.884 EAL: Detected lcore 33 as core 33 on socket 0 00:04:48.884 EAL: Detected lcore 34 as core 34 on socket 0 00:04:48.884 EAL: Detected lcore 35 as core 35 on socket 0 00:04:48.884 EAL: Detected lcore 36 as core 0 on socket 1 00:04:48.884 EAL: Detected lcore 37 as core 1 on socket 1 00:04:48.884 EAL: Detected lcore 38 as core 2 on socket 1 00:04:48.884 EAL: Detected lcore 39 as core 3 on socket 1 00:04:48.884 EAL: Detected lcore 40 as core 4 on socket 1 00:04:48.884 EAL: Detected lcore 41 as core 5 on socket 1 00:04:48.884 EAL: Detected lcore 42 as core 6 on socket 1 00:04:48.884 EAL: Detected lcore 43 as core 7 on socket 1 00:04:48.884 EAL: Detected lcore 44 as core 8 on socket 1 00:04:48.884 EAL: Detected lcore 45 as core 9 on socket 1 00:04:48.884 EAL: Detected lcore 46 as core 10 on socket 1 00:04:48.884 EAL: Detected lcore 47 as core 11 on socket 1 00:04:48.884 EAL: Detected lcore 48 as core 12 on socket 1 00:04:48.884 EAL: Detected lcore 49 as core 13 on socket 1 00:04:48.884 EAL: Detected lcore 50 as core 14 on socket 1 00:04:48.884 EAL: Detected lcore 51 as core 15 on socket 1 00:04:48.884 
EAL: Detected lcore 52 as core 16 on socket 1 00:04:48.884 EAL: Detected lcore 53 as core 17 on socket 1 00:04:48.884 EAL: Detected lcore 54 as core 18 on socket 1 00:04:48.884 EAL: Detected lcore 55 as core 19 on socket 1 00:04:48.884 EAL: Detected lcore 56 as core 20 on socket 1 00:04:48.884 EAL: Detected lcore 57 as core 21 on socket 1 00:04:48.884 EAL: Detected lcore 58 as core 22 on socket 1 00:04:48.884 EAL: Detected lcore 59 as core 23 on socket 1 00:04:48.884 EAL: Detected lcore 60 as core 24 on socket 1 00:04:48.884 EAL: Detected lcore 61 as core 25 on socket 1 00:04:48.884 EAL: Detected lcore 62 as core 26 on socket 1 00:04:48.884 EAL: Detected lcore 63 as core 27 on socket 1 00:04:48.884 EAL: Detected lcore 64 as core 28 on socket 1 00:04:48.884 EAL: Detected lcore 65 as core 29 on socket 1 00:04:48.884 EAL: Detected lcore 66 as core 30 on socket 1 00:04:48.884 EAL: Detected lcore 67 as core 31 on socket 1 00:04:48.884 EAL: Detected lcore 68 as core 32 on socket 1 00:04:48.884 EAL: Detected lcore 69 as core 33 on socket 1 00:04:48.884 EAL: Detected lcore 70 as core 34 on socket 1 00:04:48.884 EAL: Detected lcore 71 as core 35 on socket 1 00:04:48.884 EAL: Detected lcore 72 as core 0 on socket 0 00:04:48.884 EAL: Detected lcore 73 as core 1 on socket 0 00:04:48.884 EAL: Detected lcore 74 as core 2 on socket 0 00:04:48.884 EAL: Detected lcore 75 as core 3 on socket 0 00:04:48.884 EAL: Detected lcore 76 as core 4 on socket 0 00:04:48.884 EAL: Detected lcore 77 as core 5 on socket 0 00:04:48.884 EAL: Detected lcore 78 as core 6 on socket 0 00:04:48.884 EAL: Detected lcore 79 as core 7 on socket 0 00:04:48.884 EAL: Detected lcore 80 as core 8 on socket 0 00:04:48.884 EAL: Detected lcore 81 as core 9 on socket 0 00:04:48.884 EAL: Detected lcore 82 as core 10 on socket 0 00:04:48.884 EAL: Detected lcore 83 as core 11 on socket 0 00:04:48.884 EAL: Detected lcore 84 as core 12 on socket 0 00:04:48.884 EAL: Detected lcore 85 as core 13 on socket 0 00:04:48.884 EAL: Detected lcore 86 as core 14 on socket 0 00:04:48.884 EAL: Detected lcore 87 as core 15 on socket 0 00:04:48.884 EAL: Detected lcore 88 as core 16 on socket 0 00:04:48.884 EAL: Detected lcore 89 as core 17 on socket 0 00:04:48.884 EAL: Detected lcore 90 as core 18 on socket 0 00:04:48.884 EAL: Detected lcore 91 as core 19 on socket 0 00:04:48.884 EAL: Detected lcore 92 as core 20 on socket 0 00:04:48.884 EAL: Detected lcore 93 as core 21 on socket 0 00:04:48.884 EAL: Detected lcore 94 as core 22 on socket 0 00:04:48.884 EAL: Detected lcore 95 as core 23 on socket 0 00:04:48.884 EAL: Detected lcore 96 as core 24 on socket 0 00:04:48.884 EAL: Detected lcore 97 as core 25 on socket 0 00:04:48.884 EAL: Detected lcore 98 as core 26 on socket 0 00:04:48.884 EAL: Detected lcore 99 as core 27 on socket 0 00:04:48.884 EAL: Detected lcore 100 as core 28 on socket 0 00:04:48.884 EAL: Detected lcore 101 as core 29 on socket 0 00:04:48.884 EAL: Detected lcore 102 as core 30 on socket 0 00:04:48.884 EAL: Detected lcore 103 as core 31 on socket 0 00:04:48.884 EAL: Detected lcore 104 as core 32 on socket 0 00:04:48.884 EAL: Detected lcore 105 as core 33 on socket 0 00:04:48.884 EAL: Detected lcore 106 as core 34 on socket 0 00:04:48.884 EAL: Detected lcore 107 as core 35 on socket 0 00:04:48.884 EAL: Detected lcore 108 as core 0 on socket 1 00:04:48.884 EAL: Detected lcore 109 as core 1 on socket 1 00:04:48.884 EAL: Detected lcore 110 as core 2 on socket 1 00:04:48.884 EAL: Detected lcore 111 as core 3 on socket 1 00:04:48.884 EAL: Detected 
lcore 112 as core 4 on socket 1 00:04:48.884 EAL: Detected lcore 113 as core 5 on socket 1 00:04:48.884 EAL: Detected lcore 114 as core 6 on socket 1 00:04:48.884 EAL: Detected lcore 115 as core 7 on socket 1 00:04:48.884 EAL: Detected lcore 116 as core 8 on socket 1 00:04:48.884 EAL: Detected lcore 117 as core 9 on socket 1 00:04:48.884 EAL: Detected lcore 118 as core 10 on socket 1 00:04:48.884 EAL: Detected lcore 119 as core 11 on socket 1 00:04:48.884 EAL: Detected lcore 120 as core 12 on socket 1 00:04:48.884 EAL: Detected lcore 121 as core 13 on socket 1 00:04:48.884 EAL: Detected lcore 122 as core 14 on socket 1 00:04:48.884 EAL: Detected lcore 123 as core 15 on socket 1 00:04:48.884 EAL: Detected lcore 124 as core 16 on socket 1 00:04:48.884 EAL: Detected lcore 125 as core 17 on socket 1 00:04:48.884 EAL: Detected lcore 126 as core 18 on socket 1 00:04:48.884 EAL: Detected lcore 127 as core 19 on socket 1 00:04:48.884 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:48.884 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:48.884 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:48.884 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:48.884 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:48.884 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:48.884 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:48.884 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:48.884 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:48.884 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:48.884 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:48.884 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:48.884 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:48.884 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:48.884 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:48.884 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:48.884 EAL: Maximum logical cores by configuration: 128 00:04:48.884 EAL: Detected CPU lcores: 128 00:04:48.884 EAL: Detected NUMA nodes: 2 00:04:48.884 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:48.884 EAL: Detected shared linkage of DPDK 00:04:48.884 EAL: No shared files mode enabled, IPC will be disabled 00:04:48.884 EAL: Bus pci wants IOVA as 'DC' 00:04:48.884 EAL: Buses did not request a specific IOVA mode. 00:04:48.884 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:48.884 EAL: Selected IOVA mode 'VA' 00:04:48.884 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.884 EAL: Probing VFIO support... 00:04:48.884 EAL: IOMMU type 1 (Type 1) is supported 00:04:48.884 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:48.884 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:48.884 EAL: VFIO support initialized 00:04:48.884 EAL: Ask a virtual area of 0x2e000 bytes 00:04:48.884 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:48.884 EAL: Setting up physically contiguous memory... 
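The lcore map EAL prints above (144 logical CPUs across 2 sockets, with lcores 128-143 skipped, presumably because the build caps logical cores at 128) can be cross-checked against sysfs. A small sketch, assuming a Linux host laid out like the one in this run:

  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    id=${cpu##*cpu}
    socket=$(cat "$cpu/topology/physical_package_id")
    echo "lcore $id is on socket $socket"
  done | head -n 8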
00:04:48.884 EAL: Setting maximum number of open files to 524288 00:04:48.884 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:48.884 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:48.884 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:48.884 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.884 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:48.884 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.884 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.884 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:48.884 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:48.884 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.884 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:48.884 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.884 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.884 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:48.884 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:48.884 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.884 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:48.885 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.885 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.885 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:48.885 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:48.885 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.885 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:48.885 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.885 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.885 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:48.885 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:48.885 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:48.885 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.885 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:48.885 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.885 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.885 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:48.885 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:48.885 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.885 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:48.885 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.885 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.885 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:48.885 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:48.885 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.885 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:48.885 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.885 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.885 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:48.885 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:48.885 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.885 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:48.885 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.885 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.885 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:48.885 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:48.885 EAL: Hugepages will be freed exactly as allocated. 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: TSC frequency is ~2400000 KHz 00:04:48.885 EAL: Main lcore 0 is ready (tid=7f1caad78a00;cpuset=[0]) 00:04:48.885 EAL: Trying to obtain current memory policy. 00:04:48.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.885 EAL: Restoring previous memory policy: 0 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was expanded by 2MB 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:48.885 EAL: Mem event callback 'spdk:(nil)' registered 00:04:48.885 00:04:48.885 00:04:48.885 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.885 http://cunit.sourceforge.net/ 00:04:48.885 00:04:48.885 00:04:48.885 Suite: components_suite 00:04:48.885 Test: vtophys_malloc_test ...passed 00:04:48.885 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:48.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.885 EAL: Restoring previous memory policy: 4 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was expanded by 4MB 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was shrunk by 4MB 00:04:48.885 EAL: Trying to obtain current memory policy. 00:04:48.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.885 EAL: Restoring previous memory policy: 4 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was expanded by 6MB 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was shrunk by 6MB 00:04:48.885 EAL: Trying to obtain current memory policy. 00:04:48.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.885 EAL: Restoring previous memory policy: 4 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was expanded by 10MB 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was shrunk by 10MB 00:04:48.885 EAL: Trying to obtain current memory policy. 
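The memseg layout above is internally consistent: each list asks for a small 0x61000-byte bookkeeping area plus a 0x400000000-byte VA window, which is 8192 segments of 2 MB, and EAL creates 4 such lists per NUMA node. The arithmetic, checked in shell with the numbers from the log:

  seg_sz=$((2 * 1024 * 1024))          # hugepage_sz from the log
  n_segs=8192
  per_list=$((seg_sz * n_segs))        # 0x400000000 = 16 GiB of VA per list
  printf 'per list: 0x%x bytes, per socket: %d GiB\n' "$per_list" $((per_list * 4 / 1024**3))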
00:04:48.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.885 EAL: Restoring previous memory policy: 4 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was expanded by 18MB 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was shrunk by 18MB 00:04:48.885 EAL: Trying to obtain current memory policy. 00:04:48.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.885 EAL: Restoring previous memory policy: 4 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was expanded by 34MB 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was shrunk by 34MB 00:04:48.885 EAL: Trying to obtain current memory policy. 00:04:48.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.885 EAL: Restoring previous memory policy: 4 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was expanded by 66MB 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was shrunk by 66MB 00:04:48.885 EAL: Trying to obtain current memory policy. 00:04:48.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.885 EAL: Restoring previous memory policy: 4 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was expanded by 130MB 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was shrunk by 130MB 00:04:48.885 EAL: Trying to obtain current memory policy. 00:04:48.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.885 EAL: Restoring previous memory policy: 4 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was expanded by 258MB 00:04:48.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.885 EAL: request: mp_malloc_sync 00:04:48.885 EAL: No shared files mode enabled, IPC is disabled 00:04:48.885 EAL: Heap on socket 0 was shrunk by 258MB 00:04:48.885 EAL: Trying to obtain current memory policy. 
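The vtophys_spdk_malloc_test expansions above and just below step through 4, 6, 10, 18, 34 MB and so on up to 1026 MB; each request looks like 2^n + 2 MB, i.e. the test roughly doubles its allocation each round. The sequence reproduced in shell, as a sanity check on the log rather than the test's actual loop:

  for n in $(seq 1 10); do
    printf '%d MB ' $((2**n + 2))
  done; echo
  # prints: 4 MB 6 MB 10 MB 18 MB 34 MB 66 MB 130 MB 258 MB 514 MB 1026 MB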
00:04:48.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.146 EAL: Restoring previous memory policy: 4 00:04:49.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.146 EAL: request: mp_malloc_sync 00:04:49.146 EAL: No shared files mode enabled, IPC is disabled 00:04:49.146 EAL: Heap on socket 0 was expanded by 514MB 00:04:49.146 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.146 EAL: request: mp_malloc_sync 00:04:49.146 EAL: No shared files mode enabled, IPC is disabled 00:04:49.146 EAL: Heap on socket 0 was shrunk by 514MB 00:04:49.146 EAL: Trying to obtain current memory policy. 00:04:49.146 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.407 EAL: Restoring previous memory policy: 4 00:04:49.407 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.407 EAL: request: mp_malloc_sync 00:04:49.407 EAL: No shared files mode enabled, IPC is disabled 00:04:49.407 EAL: Heap on socket 0 was expanded by 1026MB 00:04:49.407 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.407 EAL: request: mp_malloc_sync 00:04:49.407 EAL: No shared files mode enabled, IPC is disabled 00:04:49.407 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:49.407 passed 00:04:49.407 00:04:49.407 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.407 suites 1 1 n/a 0 0 00:04:49.407 tests 2 2 2 0 0 00:04:49.407 asserts 497 497 497 0 n/a 00:04:49.407 00:04:49.407 Elapsed time = 0.647 seconds 00:04:49.407 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.407 EAL: request: mp_malloc_sync 00:04:49.407 EAL: No shared files mode enabled, IPC is disabled 00:04:49.407 EAL: Heap on socket 0 was shrunk by 2MB 00:04:49.407 EAL: No shared files mode enabled, IPC is disabled 00:04:49.408 EAL: No shared files mode enabled, IPC is disabled 00:04:49.408 EAL: No shared files mode enabled, IPC is disabled 00:04:49.408 00:04:49.408 real 0m0.785s 00:04:49.408 user 0m0.404s 00:04:49.408 sys 0m0.352s 00:04:49.408 12:50:11 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.408 12:50:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:49.408 ************************************ 00:04:49.408 END TEST env_vtophys 00:04:49.408 ************************************ 00:04:49.669 12:50:11 env -- common/autotest_common.sh@1142 -- # return 0 00:04:49.669 12:50:11 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:49.669 12:50:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.669 12:50:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.669 12:50:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.669 ************************************ 00:04:49.669 START TEST env_pci 00:04:49.669 ************************************ 00:04:49.669 12:50:11 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:49.669 00:04:49.669 00:04:49.669 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.669 http://cunit.sourceforge.net/ 00:04:49.669 00:04:49.669 00:04:49.669 Suite: pci 00:04:49.669 Test: pci_hook ...[2024-07-15 12:50:11.315842] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 452986 has claimed it 00:04:49.669 EAL: Cannot find device (10000:00:01.0) 00:04:49.669 EAL: Failed to attach device on primary process 00:04:49.669 passed 00:04:49.669 
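The pci_hook case above is a negative test: a claim lock is created for a BDF in a fictitious domain (10000:00:01.0), so attaching is expected to fail with "Cannot find device". The lock naming shown in the message makes leftover claims easy to list; an illustrative sketch only, with the path format taken from the log:

  for lock in /var/tmp/spdk_pci_lock_*; do
    [[ -e $lock ]] || continue
    bdf=${lock#/var/tmp/spdk_pci_lock_}
    echo "BDF $bdf is claimed via $lock"
  done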
00:04:49.669 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.669 suites 1 1 n/a 0 0 00:04:49.669 tests 1 1 1 0 0 00:04:49.669 asserts 25 25 25 0 n/a 00:04:49.669 00:04:49.669 Elapsed time = 0.033 seconds 00:04:49.669 00:04:49.669 real 0m0.054s 00:04:49.669 user 0m0.017s 00:04:49.669 sys 0m0.037s 00:04:49.669 12:50:11 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.669 12:50:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:49.669 ************************************ 00:04:49.669 END TEST env_pci 00:04:49.669 ************************************ 00:04:49.669 12:50:11 env -- common/autotest_common.sh@1142 -- # return 0 00:04:49.669 12:50:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:49.669 12:50:11 env -- env/env.sh@15 -- # uname 00:04:49.669 12:50:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:49.669 12:50:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:49.669 12:50:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.669 12:50:11 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:49.669 12:50:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.669 12:50:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.669 ************************************ 00:04:49.669 START TEST env_dpdk_post_init 00:04:49.669 ************************************ 00:04:49.669 12:50:11 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.669 EAL: Detected CPU lcores: 128 00:04:49.669 EAL: Detected NUMA nodes: 2 00:04:49.669 EAL: Detected shared linkage of DPDK 00:04:49.669 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.669 EAL: Selected IOVA mode 'VA' 00:04:49.669 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.669 EAL: VFIO support initialized 00:04:49.669 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.930 EAL: Using IOMMU type 1 (Type 1) 00:04:49.930 EAL: Ignore mapping IO port bar(1) 00:04:50.190 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:50.190 EAL: Ignore mapping IO port bar(1) 00:04:50.190 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:50.452 EAL: Ignore mapping IO port bar(1) 00:04:50.452 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:50.712 EAL: Ignore mapping IO port bar(1) 00:04:50.712 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:50.972 EAL: Ignore mapping IO port bar(1) 00:04:50.972 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:51.231 EAL: Ignore mapping IO port bar(1) 00:04:51.231 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:51.231 EAL: Ignore mapping IO port bar(1) 00:04:51.492 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:51.492 EAL: Ignore mapping IO port bar(1) 00:04:51.751 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:51.751 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:52.011 EAL: Ignore mapping IO port bar(1) 00:04:52.011 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
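env_dpdk_post_init above is started with '-c 0x1 --base-virtaddr=0x200000000000', i.e. a core mask selecting only lcore 0 plus a fixed base virtual address. Decoding a hex core mask into lcore numbers, as a small sketch:

  mask=0x1
  for ((i = 0; i < 128; i++)); do
    (( (mask >> i) & 1 )) && echo "lcore $i enabled"
  done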
00:04:52.271 EAL: Ignore mapping IO port bar(1) 00:04:52.271 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:52.532 EAL: Ignore mapping IO port bar(1) 00:04:52.532 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:52.791 EAL: Ignore mapping IO port bar(1) 00:04:52.792 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:52.792 EAL: Ignore mapping IO port bar(1) 00:04:53.052 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:53.052 EAL: Ignore mapping IO port bar(1) 00:04:53.311 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:53.311 EAL: Ignore mapping IO port bar(1) 00:04:53.572 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:53.572 EAL: Ignore mapping IO port bar(1) 00:04:53.572 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:53.572 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:53.572 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:53.833 Starting DPDK initialization... 00:04:53.833 Starting SPDK post initialization... 00:04:53.833 SPDK NVMe probe 00:04:53.833 Attaching to 0000:65:00.0 00:04:53.833 Attached to 0000:65:00.0 00:04:53.833 Cleaning up... 00:04:55.750 00:04:55.750 real 0m5.725s 00:04:55.750 user 0m0.183s 00:04:55.750 sys 0m0.086s 00:04:55.750 12:50:17 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.750 12:50:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.750 ************************************ 00:04:55.750 END TEST env_dpdk_post_init 00:04:55.750 ************************************ 00:04:55.750 12:50:17 env -- common/autotest_common.sh@1142 -- # return 0 00:04:55.750 12:50:17 env -- env/env.sh@26 -- # uname 00:04:55.750 12:50:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:55.750 12:50:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.750 12:50:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.750 12:50:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.750 12:50:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.750 ************************************ 00:04:55.750 START TEST env_mem_callbacks 00:04:55.750 ************************************ 00:04:55.750 12:50:17 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.750 EAL: Detected CPU lcores: 128 00:04:55.750 EAL: Detected NUMA nodes: 2 00:04:55.750 EAL: Detected shared linkage of DPDK 00:04:55.750 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.750 EAL: Selected IOVA mode 'VA' 00:04:55.750 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.750 EAL: VFIO support initialized 00:04:55.750 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:55.750 00:04:55.750 00:04:55.750 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.750 http://cunit.sourceforge.net/ 00:04:55.750 00:04:55.750 00:04:55.750 Suite: memory 00:04:55.750 Test: test ... 
00:04:55.750 register 0x200000200000 2097152 00:04:55.750 malloc 3145728 00:04:55.750 register 0x200000400000 4194304 00:04:55.750 buf 0x200000500000 len 3145728 PASSED 00:04:55.750 malloc 64 00:04:55.750 buf 0x2000004fff40 len 64 PASSED 00:04:55.750 malloc 4194304 00:04:55.750 register 0x200000800000 6291456 00:04:55.750 buf 0x200000a00000 len 4194304 PASSED 00:04:55.750 free 0x200000500000 3145728 00:04:55.750 free 0x2000004fff40 64 00:04:55.750 unregister 0x200000400000 4194304 PASSED 00:04:55.750 free 0x200000a00000 4194304 00:04:55.750 unregister 0x200000800000 6291456 PASSED 00:04:55.750 malloc 8388608 00:04:55.750 register 0x200000400000 10485760 00:04:55.750 buf 0x200000600000 len 8388608 PASSED 00:04:55.750 free 0x200000600000 8388608 00:04:55.750 unregister 0x200000400000 10485760 PASSED 00:04:55.750 passed 00:04:55.750 00:04:55.750 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.750 suites 1 1 n/a 0 0 00:04:55.750 tests 1 1 1 0 0 00:04:55.750 asserts 15 15 15 0 n/a 00:04:55.750 00:04:55.750 Elapsed time = 0.004 seconds 00:04:55.750 00:04:55.750 real 0m0.059s 00:04:55.750 user 0m0.019s 00:04:55.750 sys 0m0.040s 00:04:55.750 12:50:17 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.750 12:50:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:55.750 ************************************ 00:04:55.750 END TEST env_mem_callbacks 00:04:55.750 ************************************ 00:04:55.750 12:50:17 env -- common/autotest_common.sh@1142 -- # return 0 00:04:55.750 00:04:55.750 real 0m7.342s 00:04:55.750 user 0m1.002s 00:04:55.750 sys 0m0.881s 00:04:55.750 12:50:17 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.750 12:50:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.750 ************************************ 00:04:55.750 END TEST env 00:04:55.750 ************************************ 00:04:55.750 12:50:17 -- common/autotest_common.sh@1142 -- # return 0 00:04:55.750 12:50:17 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:55.750 12:50:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.751 12:50:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.751 12:50:17 -- common/autotest_common.sh@10 -- # set +x 00:04:55.751 ************************************ 00:04:55.751 START TEST rpc 00:04:55.751 ************************************ 00:04:55.751 12:50:17 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:55.751 * Looking for test storage... 00:04:55.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:55.751 12:50:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=454367 00:04:55.751 12:50:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.751 12:50:17 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:55.751 12:50:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 454367 00:04:55.751 12:50:17 rpc -- common/autotest_common.sh@829 -- # '[' -z 454367 ']' 00:04:55.751 12:50:17 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.751 12:50:17 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.751 12:50:17 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
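waitforlisten above blocks until the freshly launched spdk_tgt (pid 454367) is serving RPCs on /var/tmp/spdk.sock. A minimal stand-in for that wait, assuming it is run from the SPDK repo root so scripts/rpc.py is available; the retry count and sleep interval are arbitrary:

  sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
    [[ -S $sock ]] && scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done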
00:04:55.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.751 12:50:17 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.751 12:50:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.013 [2024-07-15 12:50:17.581995] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:04:56.013 [2024-07-15 12:50:17.582081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid454367 ] 00:04:56.013 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.013 [2024-07-15 12:50:17.653807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.013 [2024-07-15 12:50:17.728258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:56.013 [2024-07-15 12:50:17.728299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 454367' to capture a snapshot of events at runtime. 00:04:56.013 [2024-07-15 12:50:17.728306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:56.013 [2024-07-15 12:50:17.728312] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:56.013 [2024-07-15 12:50:17.728318] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid454367 for offline analysis/debug. 00:04:56.013 [2024-07-15 12:50:17.728338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.585 12:50:18 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.585 12:50:18 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:56.585 12:50:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:56.585 12:50:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:56.585 12:50:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:56.585 12:50:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:56.585 12:50:18 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.585 12:50:18 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.585 12:50:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.585 ************************************ 00:04:56.585 START TEST rpc_integrity 00:04:56.585 ************************************ 00:04:56.585 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:56.585 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.585 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.585 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.585 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.585 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:56.585 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:56.846 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.846 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.846 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.846 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.846 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.846 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:56.846 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.846 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.846 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.846 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.846 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.846 { 00:04:56.846 "name": "Malloc0", 00:04:56.846 "aliases": [ 00:04:56.846 "d3547f0d-dfff-43ab-a298-6bf9bea20eb3" 00:04:56.846 ], 00:04:56.846 "product_name": "Malloc disk", 00:04:56.846 "block_size": 512, 00:04:56.846 "num_blocks": 16384, 00:04:56.846 "uuid": "d3547f0d-dfff-43ab-a298-6bf9bea20eb3", 00:04:56.846 "assigned_rate_limits": { 00:04:56.846 "rw_ios_per_sec": 0, 00:04:56.846 "rw_mbytes_per_sec": 0, 00:04:56.846 "r_mbytes_per_sec": 0, 00:04:56.846 "w_mbytes_per_sec": 0 00:04:56.846 }, 00:04:56.846 "claimed": false, 00:04:56.846 "zoned": false, 00:04:56.846 "supported_io_types": { 00:04:56.846 "read": true, 00:04:56.846 "write": true, 00:04:56.846 "unmap": true, 00:04:56.846 "flush": true, 00:04:56.846 "reset": true, 00:04:56.846 "nvme_admin": false, 00:04:56.846 "nvme_io": false, 00:04:56.846 "nvme_io_md": false, 00:04:56.846 "write_zeroes": true, 00:04:56.846 "zcopy": true, 00:04:56.846 "get_zone_info": false, 00:04:56.846 "zone_management": false, 00:04:56.846 "zone_append": false, 00:04:56.846 "compare": false, 00:04:56.846 "compare_and_write": false, 00:04:56.846 "abort": true, 00:04:56.846 "seek_hole": false, 00:04:56.846 "seek_data": false, 00:04:56.846 "copy": true, 00:04:56.846 "nvme_iov_md": false 00:04:56.846 }, 00:04:56.846 "memory_domains": [ 00:04:56.846 { 00:04:56.846 "dma_device_id": "system", 00:04:56.846 "dma_device_type": 1 00:04:56.846 }, 00:04:56.847 { 00:04:56.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.847 "dma_device_type": 2 00:04:56.847 } 00:04:56.847 ], 00:04:56.847 "driver_specific": {} 00:04:56.847 } 00:04:56.847 ]' 00:04:56.847 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:56.847 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.847 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.847 [2024-07-15 12:50:18.505127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:56.847 [2024-07-15 12:50:18.505158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.847 [2024-07-15 12:50:18.505171] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x171ba10 00:04:56.847 [2024-07-15 12:50:18.505178] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.847 
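For reference, the integrity check above boils down to a create/claim/teardown round trip over the JSON-RPC socket. A minimal sketch of the same sequence issued by hand with scripts/rpc.py (names, sizes and the default /var/tmp/spdk.sock socket mirror the trace; paths are relative to the SPDK tree):

# create an 8 MiB malloc bdev with a 512-byte block size (reported as Malloc0, 16384 blocks)
scripts/rpc.py bdev_malloc_create 8 512
# stack a passthru bdev on top of it; the base bdev then shows up as claimed
scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
# the jq length checks in rpc.sh expect exactly two bdevs at this point
scripts/rpc.py bdev_get_bdevs | jq length
# tear down in reverse order so the claimed malloc bdev can be deleted
scripts/rpc.py bdev_passthru_delete Passthru0
scripts/rpc.py bdev_malloc_delete Malloc0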
[2024-07-15 12:50:18.506494] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.847 [2024-07-15 12:50:18.506514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.847 Passthru0 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.847 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.847 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.847 { 00:04:56.847 "name": "Malloc0", 00:04:56.847 "aliases": [ 00:04:56.847 "d3547f0d-dfff-43ab-a298-6bf9bea20eb3" 00:04:56.847 ], 00:04:56.847 "product_name": "Malloc disk", 00:04:56.847 "block_size": 512, 00:04:56.847 "num_blocks": 16384, 00:04:56.847 "uuid": "d3547f0d-dfff-43ab-a298-6bf9bea20eb3", 00:04:56.847 "assigned_rate_limits": { 00:04:56.847 "rw_ios_per_sec": 0, 00:04:56.847 "rw_mbytes_per_sec": 0, 00:04:56.847 "r_mbytes_per_sec": 0, 00:04:56.847 "w_mbytes_per_sec": 0 00:04:56.847 }, 00:04:56.847 "claimed": true, 00:04:56.847 "claim_type": "exclusive_write", 00:04:56.847 "zoned": false, 00:04:56.847 "supported_io_types": { 00:04:56.847 "read": true, 00:04:56.847 "write": true, 00:04:56.847 "unmap": true, 00:04:56.847 "flush": true, 00:04:56.847 "reset": true, 00:04:56.847 "nvme_admin": false, 00:04:56.847 "nvme_io": false, 00:04:56.847 "nvme_io_md": false, 00:04:56.847 "write_zeroes": true, 00:04:56.847 "zcopy": true, 00:04:56.847 "get_zone_info": false, 00:04:56.847 "zone_management": false, 00:04:56.847 "zone_append": false, 00:04:56.847 "compare": false, 00:04:56.847 "compare_and_write": false, 00:04:56.847 "abort": true, 00:04:56.847 "seek_hole": false, 00:04:56.847 "seek_data": false, 00:04:56.847 "copy": true, 00:04:56.847 "nvme_iov_md": false 00:04:56.847 }, 00:04:56.847 "memory_domains": [ 00:04:56.847 { 00:04:56.847 "dma_device_id": "system", 00:04:56.847 "dma_device_type": 1 00:04:56.847 }, 00:04:56.847 { 00:04:56.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.847 "dma_device_type": 2 00:04:56.847 } 00:04:56.847 ], 00:04:56.847 "driver_specific": {} 00:04:56.847 }, 00:04:56.847 { 00:04:56.847 "name": "Passthru0", 00:04:56.847 "aliases": [ 00:04:56.847 "a606a9d4-0490-5c55-83b8-9850649af030" 00:04:56.847 ], 00:04:56.847 "product_name": "passthru", 00:04:56.847 "block_size": 512, 00:04:56.847 "num_blocks": 16384, 00:04:56.847 "uuid": "a606a9d4-0490-5c55-83b8-9850649af030", 00:04:56.847 "assigned_rate_limits": { 00:04:56.847 "rw_ios_per_sec": 0, 00:04:56.847 "rw_mbytes_per_sec": 0, 00:04:56.847 "r_mbytes_per_sec": 0, 00:04:56.847 "w_mbytes_per_sec": 0 00:04:56.847 }, 00:04:56.847 "claimed": false, 00:04:56.847 "zoned": false, 00:04:56.847 "supported_io_types": { 00:04:56.847 "read": true, 00:04:56.847 "write": true, 00:04:56.847 "unmap": true, 00:04:56.847 "flush": true, 00:04:56.847 "reset": true, 00:04:56.847 "nvme_admin": false, 00:04:56.847 "nvme_io": false, 00:04:56.847 "nvme_io_md": false, 00:04:56.847 "write_zeroes": true, 00:04:56.847 "zcopy": true, 00:04:56.847 "get_zone_info": false, 00:04:56.847 "zone_management": false, 00:04:56.847 "zone_append": false, 00:04:56.847 "compare": false, 00:04:56.847 "compare_and_write": false, 00:04:56.847 "abort": true, 00:04:56.847 "seek_hole": false, 
00:04:56.847 "seek_data": false, 00:04:56.847 "copy": true, 00:04:56.847 "nvme_iov_md": false 00:04:56.847 }, 00:04:56.847 "memory_domains": [ 00:04:56.847 { 00:04:56.847 "dma_device_id": "system", 00:04:56.847 "dma_device_type": 1 00:04:56.847 }, 00:04:56.847 { 00:04:56.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.847 "dma_device_type": 2 00:04:56.847 } 00:04:56.847 ], 00:04:56.847 "driver_specific": { 00:04:56.847 "passthru": { 00:04:56.847 "name": "Passthru0", 00:04:56.847 "base_bdev_name": "Malloc0" 00:04:56.847 } 00:04:56.847 } 00:04:56.847 } 00:04:56.847 ]' 00:04:56.847 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:56.847 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.847 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.847 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.847 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.847 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.847 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:56.847 12:50:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.847 00:04:56.847 real 0m0.295s 00:04:56.847 user 0m0.186s 00:04:56.847 sys 0m0.043s 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.847 12:50:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.847 ************************************ 00:04:56.847 END TEST rpc_integrity 00:04:56.847 ************************************ 00:04:57.108 12:50:18 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.108 12:50:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:57.108 12:50:18 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.108 12:50:18 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.108 12:50:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.108 ************************************ 00:04:57.108 START TEST rpc_plugins 00:04:57.108 ************************************ 00:04:57.108 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:57.108 12:50:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:57.108 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.108 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.108 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.108 12:50:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:57.108 12:50:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:57.108 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.108 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.108 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.108 12:50:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:57.108 { 00:04:57.109 "name": "Malloc1", 00:04:57.109 "aliases": [ 00:04:57.109 "29f26a7a-a58d-449e-93d3-feef41866da3" 00:04:57.109 ], 00:04:57.109 "product_name": "Malloc disk", 00:04:57.109 "block_size": 4096, 00:04:57.109 "num_blocks": 256, 00:04:57.109 "uuid": "29f26a7a-a58d-449e-93d3-feef41866da3", 00:04:57.109 "assigned_rate_limits": { 00:04:57.109 "rw_ios_per_sec": 0, 00:04:57.109 "rw_mbytes_per_sec": 0, 00:04:57.109 "r_mbytes_per_sec": 0, 00:04:57.109 "w_mbytes_per_sec": 0 00:04:57.109 }, 00:04:57.109 "claimed": false, 00:04:57.109 "zoned": false, 00:04:57.109 "supported_io_types": { 00:04:57.109 "read": true, 00:04:57.109 "write": true, 00:04:57.109 "unmap": true, 00:04:57.109 "flush": true, 00:04:57.109 "reset": true, 00:04:57.109 "nvme_admin": false, 00:04:57.109 "nvme_io": false, 00:04:57.109 "nvme_io_md": false, 00:04:57.109 "write_zeroes": true, 00:04:57.109 "zcopy": true, 00:04:57.109 "get_zone_info": false, 00:04:57.109 "zone_management": false, 00:04:57.109 "zone_append": false, 00:04:57.109 "compare": false, 00:04:57.109 "compare_and_write": false, 00:04:57.109 "abort": true, 00:04:57.109 "seek_hole": false, 00:04:57.109 "seek_data": false, 00:04:57.109 "copy": true, 00:04:57.109 "nvme_iov_md": false 00:04:57.109 }, 00:04:57.109 "memory_domains": [ 00:04:57.109 { 00:04:57.109 "dma_device_id": "system", 00:04:57.109 "dma_device_type": 1 00:04:57.109 }, 00:04:57.109 { 00:04:57.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.109 "dma_device_type": 2 00:04:57.109 } 00:04:57.109 ], 00:04:57.109 "driver_specific": {} 00:04:57.109 } 00:04:57.109 ]' 00:04:57.109 12:50:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:57.109 12:50:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:57.109 12:50:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:57.109 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.109 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.109 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.109 12:50:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:57.109 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.109 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.109 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.109 12:50:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:57.109 12:50:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:57.109 12:50:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:57.109 00:04:57.109 real 0m0.149s 00:04:57.109 user 0m0.096s 00:04:57.109 sys 0m0.017s 00:04:57.109 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.109 12:50:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.109 ************************************ 00:04:57.109 END TEST rpc_plugins 00:04:57.109 ************************************ 00:04:57.109 12:50:18 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.109 12:50:18 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:57.109 12:50:18 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.109 12:50:18 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.109 12:50:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.369 ************************************ 00:04:57.370 START TEST rpc_trace_cmd_test 00:04:57.370 ************************************ 00:04:57.370 12:50:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:57.370 12:50:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:57.370 12:50:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:57.370 12:50:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.370 12:50:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:57.370 12:50:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.370 12:50:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:57.370 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid454367", 00:04:57.370 "tpoint_group_mask": "0x8", 00:04:57.370 "iscsi_conn": { 00:04:57.370 "mask": "0x2", 00:04:57.370 "tpoint_mask": "0x0" 00:04:57.370 }, 00:04:57.370 "scsi": { 00:04:57.370 "mask": "0x4", 00:04:57.370 "tpoint_mask": "0x0" 00:04:57.370 }, 00:04:57.370 "bdev": { 00:04:57.370 "mask": "0x8", 00:04:57.370 "tpoint_mask": "0xffffffffffffffff" 00:04:57.370 }, 00:04:57.370 "nvmf_rdma": { 00:04:57.370 "mask": "0x10", 00:04:57.370 "tpoint_mask": "0x0" 00:04:57.370 }, 00:04:57.370 "nvmf_tcp": { 00:04:57.370 "mask": "0x20", 00:04:57.370 "tpoint_mask": "0x0" 00:04:57.370 }, 00:04:57.370 "ftl": { 00:04:57.370 "mask": "0x40", 00:04:57.370 "tpoint_mask": "0x0" 00:04:57.370 }, 00:04:57.370 "blobfs": { 00:04:57.370 "mask": "0x80", 00:04:57.370 "tpoint_mask": "0x0" 00:04:57.370 }, 00:04:57.370 "dsa": { 00:04:57.370 "mask": "0x200", 00:04:57.370 "tpoint_mask": "0x0" 00:04:57.370 }, 00:04:57.370 "thread": { 00:04:57.370 "mask": "0x400", 00:04:57.370 "tpoint_mask": "0x0" 00:04:57.370 }, 00:04:57.370 "nvme_pcie": { 00:04:57.370 "mask": "0x800", 00:04:57.370 "tpoint_mask": "0x0" 00:04:57.370 }, 00:04:57.370 "iaa": { 00:04:57.370 "mask": "0x1000", 00:04:57.370 "tpoint_mask": "0x0" 00:04:57.370 }, 00:04:57.370 "nvme_tcp": { 00:04:57.370 "mask": "0x2000", 00:04:57.370 "tpoint_mask": "0x0" 00:04:57.370 }, 00:04:57.370 "bdev_nvme": { 00:04:57.370 "mask": "0x4000", 00:04:57.370 "tpoint_mask": "0x0" 00:04:57.370 }, 00:04:57.370 "sock": { 00:04:57.370 "mask": "0x8000", 00:04:57.370 "tpoint_mask": "0x0" 00:04:57.370 } 00:04:57.370 }' 00:04:57.370 12:50:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:57.370 12:50:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:57.370 12:50:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:57.370 12:50:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:57.370 12:50:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:57.370 12:50:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:57.370 12:50:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:57.370 12:50:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:57.370 12:50:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:57.370 12:50:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
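The mask values asserted above follow from starting the target with '-e bdev'. A short sketch of inspecting the trace setup for this run (the pid 454367 and the shm path are specific to this log, so treat them as illustrative only):

# bdev is tracepoint group 0x8; -e bdev enables its full per-group mask
scripts/rpc.py trace_get_info | jq -r '.tpoint_group_mask, .bdev.tpoint_mask'
# the trace ring lives in shared memory named after the app and its pid
ls -l /dev/shm/spdk_tgt_trace.pid454367
# capture a snapshot of events at runtime, as suggested by the app_setup_trace notice
spdk_trace -s spdk_tgt -p 454367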
00:04:57.370 00:04:57.370 real 0m0.226s 00:04:57.370 user 0m0.191s 00:04:57.370 sys 0m0.028s 00:04:57.370 12:50:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.370 12:50:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:57.370 ************************************ 00:04:57.370 END TEST rpc_trace_cmd_test 00:04:57.370 ************************************ 00:04:57.630 12:50:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.630 12:50:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:57.630 12:50:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:57.630 12:50:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:57.630 12:50:19 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.630 12:50:19 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.630 12:50:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.630 ************************************ 00:04:57.630 START TEST rpc_daemon_integrity 00:04:57.630 ************************************ 00:04:57.630 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:57.630 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:57.630 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.630 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.630 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.630 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:57.631 { 00:04:57.631 "name": "Malloc2", 00:04:57.631 "aliases": [ 00:04:57.631 "9f025df2-246d-40b6-943b-6e4ce0e57e46" 00:04:57.631 ], 00:04:57.631 "product_name": "Malloc disk", 00:04:57.631 "block_size": 512, 00:04:57.631 "num_blocks": 16384, 00:04:57.631 "uuid": "9f025df2-246d-40b6-943b-6e4ce0e57e46", 00:04:57.631 "assigned_rate_limits": { 00:04:57.631 "rw_ios_per_sec": 0, 00:04:57.631 "rw_mbytes_per_sec": 0, 00:04:57.631 "r_mbytes_per_sec": 0, 00:04:57.631 "w_mbytes_per_sec": 0 00:04:57.631 }, 00:04:57.631 "claimed": false, 00:04:57.631 "zoned": false, 00:04:57.631 "supported_io_types": { 00:04:57.631 "read": true, 00:04:57.631 "write": true, 00:04:57.631 "unmap": true, 00:04:57.631 "flush": true, 00:04:57.631 "reset": true, 00:04:57.631 "nvme_admin": false, 00:04:57.631 "nvme_io": false, 
00:04:57.631 "nvme_io_md": false, 00:04:57.631 "write_zeroes": true, 00:04:57.631 "zcopy": true, 00:04:57.631 "get_zone_info": false, 00:04:57.631 "zone_management": false, 00:04:57.631 "zone_append": false, 00:04:57.631 "compare": false, 00:04:57.631 "compare_and_write": false, 00:04:57.631 "abort": true, 00:04:57.631 "seek_hole": false, 00:04:57.631 "seek_data": false, 00:04:57.631 "copy": true, 00:04:57.631 "nvme_iov_md": false 00:04:57.631 }, 00:04:57.631 "memory_domains": [ 00:04:57.631 { 00:04:57.631 "dma_device_id": "system", 00:04:57.631 "dma_device_type": 1 00:04:57.631 }, 00:04:57.631 { 00:04:57.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.631 "dma_device_type": 2 00:04:57.631 } 00:04:57.631 ], 00:04:57.631 "driver_specific": {} 00:04:57.631 } 00:04:57.631 ]' 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.631 [2024-07-15 12:50:19.399559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:57.631 [2024-07-15 12:50:19.399589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:57.631 [2024-07-15 12:50:19.399602] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18b2fe0 00:04:57.631 [2024-07-15 12:50:19.399609] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:57.631 [2024-07-15 12:50:19.400819] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:57.631 [2024-07-15 12:50:19.400838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:57.631 Passthru0 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:57.631 { 00:04:57.631 "name": "Malloc2", 00:04:57.631 "aliases": [ 00:04:57.631 "9f025df2-246d-40b6-943b-6e4ce0e57e46" 00:04:57.631 ], 00:04:57.631 "product_name": "Malloc disk", 00:04:57.631 "block_size": 512, 00:04:57.631 "num_blocks": 16384, 00:04:57.631 "uuid": "9f025df2-246d-40b6-943b-6e4ce0e57e46", 00:04:57.631 "assigned_rate_limits": { 00:04:57.631 "rw_ios_per_sec": 0, 00:04:57.631 "rw_mbytes_per_sec": 0, 00:04:57.631 "r_mbytes_per_sec": 0, 00:04:57.631 "w_mbytes_per_sec": 0 00:04:57.631 }, 00:04:57.631 "claimed": true, 00:04:57.631 "claim_type": "exclusive_write", 00:04:57.631 "zoned": false, 00:04:57.631 "supported_io_types": { 00:04:57.631 "read": true, 00:04:57.631 "write": true, 00:04:57.631 "unmap": true, 00:04:57.631 "flush": true, 00:04:57.631 "reset": true, 00:04:57.631 "nvme_admin": false, 00:04:57.631 "nvme_io": false, 00:04:57.631 "nvme_io_md": false, 00:04:57.631 "write_zeroes": true, 00:04:57.631 "zcopy": true, 00:04:57.631 "get_zone_info": 
false, 00:04:57.631 "zone_management": false, 00:04:57.631 "zone_append": false, 00:04:57.631 "compare": false, 00:04:57.631 "compare_and_write": false, 00:04:57.631 "abort": true, 00:04:57.631 "seek_hole": false, 00:04:57.631 "seek_data": false, 00:04:57.631 "copy": true, 00:04:57.631 "nvme_iov_md": false 00:04:57.631 }, 00:04:57.631 "memory_domains": [ 00:04:57.631 { 00:04:57.631 "dma_device_id": "system", 00:04:57.631 "dma_device_type": 1 00:04:57.631 }, 00:04:57.631 { 00:04:57.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.631 "dma_device_type": 2 00:04:57.631 } 00:04:57.631 ], 00:04:57.631 "driver_specific": {} 00:04:57.631 }, 00:04:57.631 { 00:04:57.631 "name": "Passthru0", 00:04:57.631 "aliases": [ 00:04:57.631 "fcc2bc06-cb01-52cb-a117-c8e32437f6f5" 00:04:57.631 ], 00:04:57.631 "product_name": "passthru", 00:04:57.631 "block_size": 512, 00:04:57.631 "num_blocks": 16384, 00:04:57.631 "uuid": "fcc2bc06-cb01-52cb-a117-c8e32437f6f5", 00:04:57.631 "assigned_rate_limits": { 00:04:57.631 "rw_ios_per_sec": 0, 00:04:57.631 "rw_mbytes_per_sec": 0, 00:04:57.631 "r_mbytes_per_sec": 0, 00:04:57.631 "w_mbytes_per_sec": 0 00:04:57.631 }, 00:04:57.631 "claimed": false, 00:04:57.631 "zoned": false, 00:04:57.631 "supported_io_types": { 00:04:57.631 "read": true, 00:04:57.631 "write": true, 00:04:57.631 "unmap": true, 00:04:57.631 "flush": true, 00:04:57.631 "reset": true, 00:04:57.631 "nvme_admin": false, 00:04:57.631 "nvme_io": false, 00:04:57.631 "nvme_io_md": false, 00:04:57.631 "write_zeroes": true, 00:04:57.631 "zcopy": true, 00:04:57.631 "get_zone_info": false, 00:04:57.631 "zone_management": false, 00:04:57.631 "zone_append": false, 00:04:57.631 "compare": false, 00:04:57.631 "compare_and_write": false, 00:04:57.631 "abort": true, 00:04:57.631 "seek_hole": false, 00:04:57.631 "seek_data": false, 00:04:57.631 "copy": true, 00:04:57.631 "nvme_iov_md": false 00:04:57.631 }, 00:04:57.631 "memory_domains": [ 00:04:57.631 { 00:04:57.631 "dma_device_id": "system", 00:04:57.631 "dma_device_type": 1 00:04:57.631 }, 00:04:57.631 { 00:04:57.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.631 "dma_device_type": 2 00:04:57.631 } 00:04:57.631 ], 00:04:57.631 "driver_specific": { 00:04:57.631 "passthru": { 00:04:57.631 "name": "Passthru0", 00:04:57.631 "base_bdev_name": "Malloc2" 00:04:57.631 } 00:04:57.631 } 00:04:57.631 } 00:04:57.631 ]' 00:04:57.631 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.892 12:50:19 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:57.892 00:04:57.892 real 0m0.294s 00:04:57.892 user 0m0.196s 00:04:57.892 sys 0m0.035s 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.892 12:50:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.892 ************************************ 00:04:57.892 END TEST rpc_daemon_integrity 00:04:57.892 ************************************ 00:04:57.892 12:50:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:57.892 12:50:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:57.892 12:50:19 rpc -- rpc/rpc.sh@84 -- # killprocess 454367 00:04:57.892 12:50:19 rpc -- common/autotest_common.sh@948 -- # '[' -z 454367 ']' 00:04:57.892 12:50:19 rpc -- common/autotest_common.sh@952 -- # kill -0 454367 00:04:57.892 12:50:19 rpc -- common/autotest_common.sh@953 -- # uname 00:04:57.892 12:50:19 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.892 12:50:19 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 454367 00:04:57.892 12:50:19 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.892 12:50:19 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.892 12:50:19 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 454367' 00:04:57.892 killing process with pid 454367 00:04:57.892 12:50:19 rpc -- common/autotest_common.sh@967 -- # kill 454367 00:04:57.892 12:50:19 rpc -- common/autotest_common.sh@972 -- # wait 454367 00:04:58.152 00:04:58.152 real 0m2.434s 00:04:58.152 user 0m3.170s 00:04:58.152 sys 0m0.705s 00:04:58.152 12:50:19 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.152 12:50:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.152 ************************************ 00:04:58.152 END TEST rpc 00:04:58.152 ************************************ 00:04:58.152 12:50:19 -- common/autotest_common.sh@1142 -- # return 0 00:04:58.152 12:50:19 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:58.152 12:50:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.152 12:50:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.152 12:50:19 -- common/autotest_common.sh@10 -- # set +x 00:04:58.152 ************************************ 00:04:58.152 START TEST skip_rpc 00:04:58.152 ************************************ 00:04:58.152 12:50:19 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:58.413 * Looking for test storage... 
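Every suite in rpc.sh, and the skip_rpc cases that follow, reuses the same target lifecycle. A hedged skeleton of it (waitforlisten and killprocess are the autotest_common.sh helpers seen in the trace, not standalone commands):

# start the target with the bdev tracepoint group enabled and remember its pid
build/bin/spdk_tgt -e bdev &
spdk_pid=$!
# guarantee cleanup even if an assertion below fails
trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
# block until the JSON-RPC server is listening on /var/tmp/spdk.sock
waitforlisten $spdk_pid
# ... rpc_cmd based checks run here ...
trap - SIGINT SIGTERM EXIT
killprocess $spdk_pid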
00:04:58.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:58.413 12:50:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:58.413 12:50:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:58.413 12:50:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:58.413 12:50:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.413 12:50:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.413 12:50:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.413 ************************************ 00:04:58.413 START TEST skip_rpc 00:04:58.413 ************************************ 00:04:58.413 12:50:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:58.413 12:50:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=454956 00:04:58.413 12:50:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.413 12:50:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:58.413 12:50:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:58.413 [2024-07-15 12:50:20.123199] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:04:58.413 [2024-07-15 12:50:20.123276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid454956 ] 00:04:58.413 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.413 [2024-07-15 12:50:20.196854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.673 [2024-07-15 12:50:20.270088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 454956 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 454956 ']' 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 454956 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 454956 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 454956' 00:05:03.959 killing process with pid 454956 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 454956 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 454956 00:05:03.959 00:05:03.959 real 0m5.278s 00:05:03.959 user 0m5.069s 00:05:03.959 sys 0m0.240s 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.959 12:50:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.959 ************************************ 00:05:03.959 END TEST skip_rpc 00:05:03.959 ************************************ 00:05:03.959 12:50:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:03.959 12:50:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:03.959 12:50:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.959 12:50:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.959 12:50:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.959 ************************************ 00:05:03.959 START TEST skip_rpc_with_json 00:05:03.959 ************************************ 00:05:03.959 12:50:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:03.959 12:50:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:03.959 12:50:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=456017 00:05:03.959 12:50:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.959 12:50:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 456017 00:05:03.959 12:50:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.959 12:50:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 456017 ']' 00:05:03.959 12:50:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.959 12:50:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.959 12:50:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
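The skip_rpc case that just finished hinges on '--no-rpc-server' leaving nothing listening on /var/tmp/spdk.sock, so the version query has to fail. A minimal sketch of that negative check, assuming a target started the same way:

# start the target without its JSON-RPC server, pinned to core 0
build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5
# spdk_get_version must fail here because no RPC server was started
if scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC answered even though --no-rpc-server was given" >&2
    exit 1
fi
kill $spdk_pid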
00:05:03.959 12:50:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.959 12:50:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.959 [2024-07-15 12:50:25.469360] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:05:03.959 [2024-07-15 12:50:25.469413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456017 ] 00:05:03.959 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.959 [2024-07-15 12:50:25.537704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.959 [2024-07-15 12:50:25.609115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.530 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.530 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:04.530 12:50:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:04.530 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.530 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.530 [2024-07-15 12:50:26.229950] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:04.530 request: 00:05:04.530 { 00:05:04.530 "trtype": "tcp", 00:05:04.530 "method": "nvmf_get_transports", 00:05:04.530 "req_id": 1 00:05:04.530 } 00:05:04.530 Got JSON-RPC error response 00:05:04.530 response: 00:05:04.530 { 00:05:04.530 "code": -19, 00:05:04.530 "message": "No such device" 00:05:04.530 } 00:05:04.530 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:04.530 12:50:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:04.530 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.530 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.530 [2024-07-15 12:50:26.242069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.530 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.530 12:50:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:04.530 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:04.530 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.790 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:04.790 12:50:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:04.790 { 00:05:04.790 "subsystems": [ 00:05:04.790 { 00:05:04.790 "subsystem": "vfio_user_target", 00:05:04.790 "config": null 00:05:04.790 }, 00:05:04.790 { 00:05:04.790 "subsystem": "keyring", 00:05:04.790 "config": [] 00:05:04.790 }, 00:05:04.790 { 00:05:04.790 "subsystem": "iobuf", 00:05:04.790 "config": [ 00:05:04.790 { 00:05:04.790 "method": "iobuf_set_options", 00:05:04.790 "params": { 00:05:04.790 "small_pool_count": 8192, 00:05:04.790 "large_pool_count": 1024, 00:05:04.790 "small_bufsize": 8192, 00:05:04.790 "large_bufsize": 
135168 00:05:04.790 } 00:05:04.790 } 00:05:04.790 ] 00:05:04.790 }, 00:05:04.790 { 00:05:04.790 "subsystem": "sock", 00:05:04.790 "config": [ 00:05:04.790 { 00:05:04.790 "method": "sock_set_default_impl", 00:05:04.790 "params": { 00:05:04.790 "impl_name": "posix" 00:05:04.790 } 00:05:04.790 }, 00:05:04.790 { 00:05:04.790 "method": "sock_impl_set_options", 00:05:04.790 "params": { 00:05:04.790 "impl_name": "ssl", 00:05:04.790 "recv_buf_size": 4096, 00:05:04.790 "send_buf_size": 4096, 00:05:04.790 "enable_recv_pipe": true, 00:05:04.790 "enable_quickack": false, 00:05:04.790 "enable_placement_id": 0, 00:05:04.791 "enable_zerocopy_send_server": true, 00:05:04.791 "enable_zerocopy_send_client": false, 00:05:04.791 "zerocopy_threshold": 0, 00:05:04.791 "tls_version": 0, 00:05:04.791 "enable_ktls": false 00:05:04.791 } 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "method": "sock_impl_set_options", 00:05:04.791 "params": { 00:05:04.791 "impl_name": "posix", 00:05:04.791 "recv_buf_size": 2097152, 00:05:04.791 "send_buf_size": 2097152, 00:05:04.791 "enable_recv_pipe": true, 00:05:04.791 "enable_quickack": false, 00:05:04.791 "enable_placement_id": 0, 00:05:04.791 "enable_zerocopy_send_server": true, 00:05:04.791 "enable_zerocopy_send_client": false, 00:05:04.791 "zerocopy_threshold": 0, 00:05:04.791 "tls_version": 0, 00:05:04.791 "enable_ktls": false 00:05:04.791 } 00:05:04.791 } 00:05:04.791 ] 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "subsystem": "vmd", 00:05:04.791 "config": [] 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "subsystem": "accel", 00:05:04.791 "config": [ 00:05:04.791 { 00:05:04.791 "method": "accel_set_options", 00:05:04.791 "params": { 00:05:04.791 "small_cache_size": 128, 00:05:04.791 "large_cache_size": 16, 00:05:04.791 "task_count": 2048, 00:05:04.791 "sequence_count": 2048, 00:05:04.791 "buf_count": 2048 00:05:04.791 } 00:05:04.791 } 00:05:04.791 ] 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "subsystem": "bdev", 00:05:04.791 "config": [ 00:05:04.791 { 00:05:04.791 "method": "bdev_set_options", 00:05:04.791 "params": { 00:05:04.791 "bdev_io_pool_size": 65535, 00:05:04.791 "bdev_io_cache_size": 256, 00:05:04.791 "bdev_auto_examine": true, 00:05:04.791 "iobuf_small_cache_size": 128, 00:05:04.791 "iobuf_large_cache_size": 16 00:05:04.791 } 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "method": "bdev_raid_set_options", 00:05:04.791 "params": { 00:05:04.791 "process_window_size_kb": 1024 00:05:04.791 } 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "method": "bdev_iscsi_set_options", 00:05:04.791 "params": { 00:05:04.791 "timeout_sec": 30 00:05:04.791 } 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "method": "bdev_nvme_set_options", 00:05:04.791 "params": { 00:05:04.791 "action_on_timeout": "none", 00:05:04.791 "timeout_us": 0, 00:05:04.791 "timeout_admin_us": 0, 00:05:04.791 "keep_alive_timeout_ms": 10000, 00:05:04.791 "arbitration_burst": 0, 00:05:04.791 "low_priority_weight": 0, 00:05:04.791 "medium_priority_weight": 0, 00:05:04.791 "high_priority_weight": 0, 00:05:04.791 "nvme_adminq_poll_period_us": 10000, 00:05:04.791 "nvme_ioq_poll_period_us": 0, 00:05:04.791 "io_queue_requests": 0, 00:05:04.791 "delay_cmd_submit": true, 00:05:04.791 "transport_retry_count": 4, 00:05:04.791 "bdev_retry_count": 3, 00:05:04.791 "transport_ack_timeout": 0, 00:05:04.791 "ctrlr_loss_timeout_sec": 0, 00:05:04.791 "reconnect_delay_sec": 0, 00:05:04.791 "fast_io_fail_timeout_sec": 0, 00:05:04.791 "disable_auto_failback": false, 00:05:04.791 "generate_uuids": false, 00:05:04.791 "transport_tos": 0, 
00:05:04.791 "nvme_error_stat": false, 00:05:04.791 "rdma_srq_size": 0, 00:05:04.791 "io_path_stat": false, 00:05:04.791 "allow_accel_sequence": false, 00:05:04.791 "rdma_max_cq_size": 0, 00:05:04.791 "rdma_cm_event_timeout_ms": 0, 00:05:04.791 "dhchap_digests": [ 00:05:04.791 "sha256", 00:05:04.791 "sha384", 00:05:04.791 "sha512" 00:05:04.791 ], 00:05:04.791 "dhchap_dhgroups": [ 00:05:04.791 "null", 00:05:04.791 "ffdhe2048", 00:05:04.791 "ffdhe3072", 00:05:04.791 "ffdhe4096", 00:05:04.791 "ffdhe6144", 00:05:04.791 "ffdhe8192" 00:05:04.791 ] 00:05:04.791 } 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "method": "bdev_nvme_set_hotplug", 00:05:04.791 "params": { 00:05:04.791 "period_us": 100000, 00:05:04.791 "enable": false 00:05:04.791 } 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "method": "bdev_wait_for_examine" 00:05:04.791 } 00:05:04.791 ] 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "subsystem": "scsi", 00:05:04.791 "config": null 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "subsystem": "scheduler", 00:05:04.791 "config": [ 00:05:04.791 { 00:05:04.791 "method": "framework_set_scheduler", 00:05:04.791 "params": { 00:05:04.791 "name": "static" 00:05:04.791 } 00:05:04.791 } 00:05:04.791 ] 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "subsystem": "vhost_scsi", 00:05:04.791 "config": [] 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "subsystem": "vhost_blk", 00:05:04.791 "config": [] 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "subsystem": "ublk", 00:05:04.791 "config": [] 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "subsystem": "nbd", 00:05:04.791 "config": [] 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "subsystem": "nvmf", 00:05:04.791 "config": [ 00:05:04.791 { 00:05:04.791 "method": "nvmf_set_config", 00:05:04.791 "params": { 00:05:04.791 "discovery_filter": "match_any", 00:05:04.791 "admin_cmd_passthru": { 00:05:04.791 "identify_ctrlr": false 00:05:04.791 } 00:05:04.791 } 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "method": "nvmf_set_max_subsystems", 00:05:04.791 "params": { 00:05:04.791 "max_subsystems": 1024 00:05:04.791 } 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "method": "nvmf_set_crdt", 00:05:04.791 "params": { 00:05:04.791 "crdt1": 0, 00:05:04.791 "crdt2": 0, 00:05:04.791 "crdt3": 0 00:05:04.791 } 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "method": "nvmf_create_transport", 00:05:04.791 "params": { 00:05:04.791 "trtype": "TCP", 00:05:04.791 "max_queue_depth": 128, 00:05:04.791 "max_io_qpairs_per_ctrlr": 127, 00:05:04.791 "in_capsule_data_size": 4096, 00:05:04.791 "max_io_size": 131072, 00:05:04.791 "io_unit_size": 131072, 00:05:04.791 "max_aq_depth": 128, 00:05:04.791 "num_shared_buffers": 511, 00:05:04.791 "buf_cache_size": 4294967295, 00:05:04.791 "dif_insert_or_strip": false, 00:05:04.791 "zcopy": false, 00:05:04.791 "c2h_success": true, 00:05:04.791 "sock_priority": 0, 00:05:04.791 "abort_timeout_sec": 1, 00:05:04.791 "ack_timeout": 0, 00:05:04.791 "data_wr_pool_size": 0 00:05:04.791 } 00:05:04.791 } 00:05:04.791 ] 00:05:04.791 }, 00:05:04.791 { 00:05:04.791 "subsystem": "iscsi", 00:05:04.791 "config": [ 00:05:04.791 { 00:05:04.791 "method": "iscsi_set_options", 00:05:04.791 "params": { 00:05:04.791 "node_base": "iqn.2016-06.io.spdk", 00:05:04.791 "max_sessions": 128, 00:05:04.791 "max_connections_per_session": 2, 00:05:04.791 "max_queue_depth": 64, 00:05:04.791 "default_time2wait": 2, 00:05:04.791 "default_time2retain": 20, 00:05:04.791 "first_burst_length": 8192, 00:05:04.791 "immediate_data": true, 00:05:04.791 "allow_duplicated_isid": false, 00:05:04.791 
"error_recovery_level": 0, 00:05:04.791 "nop_timeout": 60, 00:05:04.791 "nop_in_interval": 30, 00:05:04.791 "disable_chap": false, 00:05:04.791 "require_chap": false, 00:05:04.791 "mutual_chap": false, 00:05:04.791 "chap_group": 0, 00:05:04.791 "max_large_datain_per_connection": 64, 00:05:04.791 "max_r2t_per_connection": 4, 00:05:04.791 "pdu_pool_size": 36864, 00:05:04.791 "immediate_data_pool_size": 16384, 00:05:04.791 "data_out_pool_size": 2048 00:05:04.791 } 00:05:04.791 } 00:05:04.791 ] 00:05:04.791 } 00:05:04.791 ] 00:05:04.791 } 00:05:04.791 12:50:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:04.791 12:50:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 456017 00:05:04.791 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 456017 ']' 00:05:04.791 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 456017 00:05:04.791 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:04.791 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.791 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 456017 00:05:04.791 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.791 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.791 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 456017' 00:05:04.791 killing process with pid 456017 00:05:04.791 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 456017 00:05:04.791 12:50:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 456017 00:05:05.052 12:50:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=456345 00:05:05.052 12:50:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:05.052 12:50:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 456345 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 456345 ']' 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 456345 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 456345 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 456345' 00:05:10.337 killing process with pid 456345 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 456345 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 456345 00:05:10.337 12:50:31 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:10.337 00:05:10.337 real 0m6.534s 00:05:10.337 user 0m6.391s 00:05:10.337 sys 0m0.541s 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.337 12:50:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.337 ************************************ 00:05:10.337 END TEST skip_rpc_with_json 00:05:10.337 ************************************ 00:05:10.337 12:50:31 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.337 12:50:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:10.337 12:50:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.337 12:50:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.337 12:50:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.337 ************************************ 00:05:10.337 START TEST skip_rpc_with_delay 00:05:10.337 ************************************ 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:10.337 [2024-07-15 12:50:32.095354] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
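skip_rpc_with_json, which wrapped up just above, is essentially a save/replay check: configure the live target, dump its state with save_config, then prove a fresh target rebuilds the same state from that file alone. A condensed sketch of the flow, using the same config.json and log.txt paths as the trace:

# on the running target: create the TCP transport, then dump the full configuration
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py save_config > test/rpc/config.json
# restart without an RPC server, replaying the saved JSON and capturing the log
build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
sleep 5
# the replayed config must have re-created the transport on its own
grep -q 'TCP Transport Init' test/rpc/log.txt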
00:05:10.337 [2024-07-15 12:50:32.095440] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:10.337 00:05:10.337 real 0m0.084s 00:05:10.337 user 0m0.055s 00:05:10.337 sys 0m0.028s 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.337 12:50:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:10.337 ************************************ 00:05:10.337 END TEST skip_rpc_with_delay 00:05:10.337 ************************************ 00:05:10.337 12:50:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:10.337 12:50:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:10.337 12:50:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:10.337 12:50:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:10.337 12:50:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.337 12:50:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.337 12:50:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.612 ************************************ 00:05:10.612 START TEST exit_on_failed_rpc_init 00:05:10.612 ************************************ 00:05:10.612 12:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:10.612 12:50:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=457433 00:05:10.612 12:50:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 457433 00:05:10.612 12:50:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.612 12:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 457433 ']' 00:05:10.612 12:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.612 12:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.612 12:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.612 12:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.612 12:50:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.612 [2024-07-15 12:50:32.256688] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:05:10.612 [2024-07-15 12:50:32.256748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457433 ] 00:05:10.612 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.612 [2024-07-15 12:50:32.327001] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.612 [2024-07-15 12:50:32.401647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.551 [2024-07-15 12:50:33.069135] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
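The exit_on_failed_rpc_init flow that starts here boils down to: keep the first target above running on the default /var/tmp/spdk.sock, launch a second target with a different core mask but the same socket, and require the second one to fail RPC initialization and exit non-zero, as the NOT-wrapped launch below shows. A condensed sketch of that sequence (SPDK_BIN is an assumed shorthand for the workspace binary path; startup synchronization is simplified relative to waitforlisten):

#!/usr/bin/env bash
# Condensed sketch of exit_on_failed_rpc_init: two spdk_tgt instances sharing
# the default RPC socket /var/tmp/spdk.sock; the second must fail to init RPC.
SPDK_BIN=${SPDK_BIN:-./build/bin/spdk_tgt}   # assumed shorthand

"$SPDK_BIN" -m 0x1 &             # first instance claims /var/tmp/spdk.sock
first_pid=$!
sleep 1                          # crude wait; the real test polls via waitforlisten

# Expected to exit almost immediately with "RPC Unix domain socket path ... in use".
if "$SPDK_BIN" -m 0x2; then
    echo "FAIL: second instance initialized RPC on a busy socket" >&2
    kill "$first_pid"
    exit 1
fi
echo "OK: second instance failed as expected"
kill "$first_pid"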
00:05:11.551 [2024-07-15 12:50:33.069188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457747 ] 00:05:11.551 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.551 [2024-07-15 12:50:33.150730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.551 [2024-07-15 12:50:33.214824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.551 [2024-07-15 12:50:33.214886] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:11.551 [2024-07-15 12:50:33.214896] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:11.551 [2024-07-15 12:50:33.214903] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 457433 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 457433 ']' 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 457433 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 457433 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 457433' 00:05:11.551 killing process with pid 457433 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 457433 00:05:11.551 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 457433 00:05:11.810 00:05:11.810 real 0m1.344s 00:05:11.810 user 0m1.571s 00:05:11.810 sys 0m0.377s 00:05:11.810 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.810 12:50:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.810 ************************************ 00:05:11.810 END TEST exit_on_failed_rpc_init 00:05:11.810 ************************************ 00:05:11.810 12:50:33 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:11.810 12:50:33 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:11.810 00:05:11.810 real 0m13.649s 00:05:11.810 user 0m13.238s 00:05:11.810 sys 0m1.467s 00:05:11.810 12:50:33 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.810 12:50:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.810 ************************************ 00:05:11.810 END TEST skip_rpc 00:05:11.810 ************************************ 00:05:11.810 12:50:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.810 12:50:33 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:11.810 12:50:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.810 12:50:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.810 12:50:33 -- common/autotest_common.sh@10 -- # set +x 00:05:12.070 ************************************ 00:05:12.070 START TEST rpc_client 00:05:12.070 ************************************ 00:05:12.070 12:50:33 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:12.070 * Looking for test storage... 00:05:12.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:12.070 12:50:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:12.070 OK 00:05:12.070 12:50:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:12.070 00:05:12.070 real 0m0.122s 00:05:12.070 user 0m0.043s 00:05:12.070 sys 0m0.084s 00:05:12.070 12:50:33 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.070 12:50:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:12.070 ************************************ 00:05:12.070 END TEST rpc_client 00:05:12.070 ************************************ 00:05:12.070 12:50:33 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.070 12:50:33 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:12.070 12:50:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.070 12:50:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.070 12:50:33 -- common/autotest_common.sh@10 -- # set +x 00:05:12.070 ************************************ 00:05:12.070 START TEST json_config 00:05:12.070 ************************************ 00:05:12.070 12:50:33 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.331 12:50:33 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:12.331 12:50:33 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.331 12:50:33 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.331 12:50:33 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.331 12:50:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.331 12:50:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.331 12:50:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.331 12:50:33 json_config -- paths/export.sh@5 -- # export PATH 00:05:12.331 12:50:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@47 -- # : 0 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.331 12:50:33 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:12.331 12:50:33 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:12.331 INFO: JSON configuration test init 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:12.331 12:50:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.331 12:50:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:12.331 12:50:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.331 12:50:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.331 12:50:33 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:12.331 12:50:33 json_config -- json_config/common.sh@9 -- # local app=target 00:05:12.331 12:50:33 json_config -- json_config/common.sh@10 -- # shift 00:05:12.331 12:50:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:12.331 12:50:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:12.331 12:50:33 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:05:12.331 12:50:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.331 12:50:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.331 12:50:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=457943 00:05:12.331 12:50:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:12.331 Waiting for target to run... 00:05:12.331 12:50:33 json_config -- json_config/common.sh@25 -- # waitforlisten 457943 /var/tmp/spdk_tgt.sock 00:05:12.331 12:50:33 json_config -- common/autotest_common.sh@829 -- # '[' -z 457943 ']' 00:05:12.331 12:50:33 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.331 12:50:33 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.331 12:50:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:12.331 12:50:33 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.331 12:50:33 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.331 12:50:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.332 [2024-07-15 12:50:34.030382] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:05:12.332 [2024-07-15 12:50:34.030452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid457943 ] 00:05:12.332 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.901 [2024-07-15 12:50:34.444185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.901 [2024-07-15 12:50:34.496141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.159 12:50:34 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.159 12:50:34 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:13.159 12:50:34 json_config -- json_config/common.sh@26 -- # echo '' 00:05:13.159 00:05:13.160 12:50:34 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:13.160 12:50:34 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:13.160 12:50:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.160 12:50:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.160 12:50:34 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:13.160 12:50:34 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:13.160 12:50:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:13.160 12:50:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.160 12:50:34 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:13.160 12:50:34 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:13.160 12:50:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:05:13.747 12:50:35 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:13.747 12:50:35 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:13.747 12:50:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.747 12:50:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.747 12:50:35 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:13.747 12:50:35 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:13.747 12:50:35 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:13.747 12:50:35 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:13.747 12:50:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:13.747 12:50:35 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:14.007 12:50:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:14.007 12:50:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:14.007 12:50:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.007 12:50:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:14.007 12:50:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:14.007 MallocForNvmf0 00:05:14.007 12:50:35 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:14.007 12:50:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:14.267 MallocForNvmf1 
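For orientation, the tgt_rpc calls around this point assemble the NVMe-oF/TCP configuration that json_config.sh will later save, diff and reload. Stripped of the test plumbing, the sequence is roughly the following rpc.py session against the target socket (the rpc.py path is shortened to scripts/rpc.py, which is an assumption; commands and arguments are the ones visible in the trace just above and below):

#!/usr/bin/env bash
# Rough recap of the RPC sequence json_config.sh uses to build the
# NVMe-oF/TCP target configuration (arguments taken from the surrounding trace).
RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"   # path shortened; assumption

$RPC bdev_malloc_create 8 512  --name MallocForNvmf0     # 8 MB bdev, 512-byte blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB bdev, 1024-byte blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420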
00:05:14.267 12:50:35 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:14.267 12:50:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:14.267 [2024-07-15 12:50:36.051380] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:14.267 12:50:36 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:14.267 12:50:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:14.527 12:50:36 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:14.527 12:50:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:14.787 12:50:36 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:14.787 12:50:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:14.787 12:50:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:14.787 12:50:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:15.047 [2024-07-15 12:50:36.653312] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:15.047 12:50:36 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:15.047 12:50:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.047 12:50:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.047 12:50:36 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:15.047 12:50:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.047 12:50:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.047 12:50:36 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:15.047 12:50:36 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:15.047 12:50:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:15.308 MallocBdevForConfigChangeCheck 00:05:15.308 12:50:36 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:15.308 12:50:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.308 12:50:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.308 12:50:36 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:15.308 12:50:36 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.570 12:50:37 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:15.570 INFO: shutting down applications... 00:05:15.570 12:50:37 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:15.570 12:50:37 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:15.570 12:50:37 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:15.570 12:50:37 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:15.830 Calling clear_iscsi_subsystem 00:05:15.830 Calling clear_nvmf_subsystem 00:05:15.830 Calling clear_nbd_subsystem 00:05:15.830 Calling clear_ublk_subsystem 00:05:15.830 Calling clear_vhost_blk_subsystem 00:05:15.830 Calling clear_vhost_scsi_subsystem 00:05:15.830 Calling clear_bdev_subsystem 00:05:15.830 12:50:37 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:15.830 12:50:37 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:15.830 12:50:37 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:15.830 12:50:37 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:15.830 12:50:37 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.830 12:50:37 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:16.403 12:50:37 json_config -- json_config/json_config.sh@345 -- # break 00:05:16.403 12:50:37 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:16.403 12:50:37 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:16.403 12:50:37 json_config -- json_config/common.sh@31 -- # local app=target 00:05:16.403 12:50:37 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:16.403 12:50:37 json_config -- json_config/common.sh@35 -- # [[ -n 457943 ]] 00:05:16.403 12:50:37 json_config -- json_config/common.sh@38 -- # kill -SIGINT 457943 00:05:16.403 12:50:37 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:16.403 12:50:37 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.403 12:50:37 json_config -- json_config/common.sh@41 -- # kill -0 457943 00:05:16.403 12:50:37 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.704 12:50:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.704 12:50:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.704 12:50:38 json_config -- json_config/common.sh@41 -- # kill -0 457943 00:05:16.704 12:50:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:16.704 12:50:38 json_config -- json_config/common.sh@43 -- # break 00:05:16.704 12:50:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:16.704 12:50:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:16.704 SPDK target shutdown done 00:05:16.704 12:50:38 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:16.704 INFO: relaunching applications... 00:05:16.704 12:50:38 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.704 12:50:38 json_config -- json_config/common.sh@9 -- # local app=target 00:05:16.704 12:50:38 json_config -- json_config/common.sh@10 -- # shift 00:05:16.704 12:50:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:16.704 12:50:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:16.704 12:50:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:16.704 12:50:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.704 12:50:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.704 12:50:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=458994 00:05:16.704 12:50:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:16.704 Waiting for target to run... 00:05:16.704 12:50:38 json_config -- json_config/common.sh@25 -- # waitforlisten 458994 /var/tmp/spdk_tgt.sock 00:05:16.704 12:50:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:16.704 12:50:38 json_config -- common/autotest_common.sh@829 -- # '[' -z 458994 ']' 00:05:16.704 12:50:38 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.704 12:50:38 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.704 12:50:38 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:16.704 12:50:38 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.704 12:50:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.996 [2024-07-15 12:50:38.518310] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:05:16.996 [2024-07-15 12:50:38.518365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid458994 ] 00:05:16.996 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.996 [2024-07-15 12:50:38.805037] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.258 [2024-07-15 12:50:38.857245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.829 [2024-07-15 12:50:39.361203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.829 [2024-07-15 12:50:39.393573] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:17.829 12:50:39 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.829 12:50:39 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:17.829 12:50:39 json_config -- json_config/common.sh@26 -- # echo '' 00:05:17.829 00:05:17.829 12:50:39 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:17.829 12:50:39 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:17.829 INFO: Checking if target configuration is the same... 00:05:17.829 12:50:39 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.829 12:50:39 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:17.829 12:50:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:17.829 + '[' 2 -ne 2 ']' 00:05:17.829 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:17.829 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:17.829 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:17.829 +++ basename /dev/fd/62 00:05:17.829 ++ mktemp /tmp/62.XXX 00:05:17.829 + tmp_file_1=/tmp/62.hAo 00:05:17.829 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:17.829 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:17.829 + tmp_file_2=/tmp/spdk_tgt_config.json.BEa 00:05:17.829 + ret=0 00:05:17.829 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.090 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.090 + diff -u /tmp/62.hAo /tmp/spdk_tgt_config.json.BEa 00:05:18.090 + echo 'INFO: JSON config files are the same' 00:05:18.090 INFO: JSON config files are the same 00:05:18.090 + rm /tmp/62.hAo /tmp/spdk_tgt_config.json.BEa 00:05:18.090 + exit 0 00:05:18.090 12:50:39 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:18.090 12:50:39 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:18.090 INFO: changing configuration and checking if this can be detected... 
00:05:18.090 12:50:39 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.090 12:50:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.351 12:50:39 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.351 12:50:39 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:18.351 12:50:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.351 + '[' 2 -ne 2 ']' 00:05:18.351 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:18.351 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:18.351 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:18.351 +++ basename /dev/fd/62 00:05:18.351 ++ mktemp /tmp/62.XXX 00:05:18.351 + tmp_file_1=/tmp/62.uBR 00:05:18.351 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.351 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:18.351 + tmp_file_2=/tmp/spdk_tgt_config.json.RnC 00:05:18.351 + ret=0 00:05:18.351 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.611 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.611 + diff -u /tmp/62.uBR /tmp/spdk_tgt_config.json.RnC 00:05:18.611 + ret=1 00:05:18.611 + echo '=== Start of file: /tmp/62.uBR ===' 00:05:18.611 + cat /tmp/62.uBR 00:05:18.611 + echo '=== End of file: /tmp/62.uBR ===' 00:05:18.611 + echo '' 00:05:18.611 + echo '=== Start of file: /tmp/spdk_tgt_config.json.RnC ===' 00:05:18.611 + cat /tmp/spdk_tgt_config.json.RnC 00:05:18.611 + echo '=== End of file: /tmp/spdk_tgt_config.json.RnC ===' 00:05:18.611 + echo '' 00:05:18.611 + rm /tmp/62.uBR /tmp/spdk_tgt_config.json.RnC 00:05:18.611 + exit 1 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:18.611 INFO: configuration change detected. 
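The "configuration change detected" verdict above comes from a plain textual comparison: the running target's configuration is dumped with save_config, normalized with config_filter.py -method sort, and diffed against spdk_tgt_config.json; deleting MallocBdevForConfigChangeCheck is what makes this second diff non-empty. In outline (paths shortened and temp-file handling simplified relative to json_diff.sh, so treat this as a sketch rather than the exact script):

#!/usr/bin/env bash
# Outline of the configuration-comparison step driven by json_diff.sh above:
# dump the live config, sort both sides, and diff them.
RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"       # shortened; assumption
FILTER="test/json_config/config_filter.py"           # shortened; assumption

$RPC save_config | $FILTER -method sort > /tmp/live_config.json
$FILTER -method sort < spdk_tgt_config.json > /tmp/saved_config.json

if diff -u /tmp/saved_config.json /tmp/live_config.json; then
    echo "INFO: JSON config files are the same"
else
    echo "INFO: configuration change detected."
fi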
00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:18.611 12:50:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.611 12:50:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@317 -- # [[ -n 458994 ]] 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:18.611 12:50:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.611 12:50:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:18.611 12:50:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.611 12:50:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.611 12:50:40 json_config -- json_config/json_config.sh@323 -- # killprocess 458994 00:05:18.612 12:50:40 json_config -- common/autotest_common.sh@948 -- # '[' -z 458994 ']' 00:05:18.612 12:50:40 json_config -- common/autotest_common.sh@952 -- # kill -0 458994 00:05:18.612 12:50:40 json_config -- common/autotest_common.sh@953 -- # uname 00:05:18.612 12:50:40 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.612 12:50:40 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 458994 00:05:18.873 12:50:40 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.873 12:50:40 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.873 12:50:40 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 458994' 00:05:18.873 killing process with pid 458994 00:05:18.873 12:50:40 json_config -- common/autotest_common.sh@967 -- # kill 458994 00:05:18.873 12:50:40 json_config -- common/autotest_common.sh@972 -- # wait 458994 00:05:19.134 12:50:40 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.134 12:50:40 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:19.134 12:50:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.134 12:50:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.134 12:50:40 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:19.134 12:50:40 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:19.134 INFO: Success 00:05:19.134 00:05:19.134 real 0m6.910s 00:05:19.134 user 
0m8.133s 00:05:19.134 sys 0m1.853s 00:05:19.134 12:50:40 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.134 12:50:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.134 ************************************ 00:05:19.134 END TEST json_config 00:05:19.134 ************************************ 00:05:19.134 12:50:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:19.134 12:50:40 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:19.134 12:50:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.134 12:50:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.134 12:50:40 -- common/autotest_common.sh@10 -- # set +x 00:05:19.134 ************************************ 00:05:19.134 START TEST json_config_extra_key 00:05:19.134 ************************************ 00:05:19.134 12:50:40 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:19.134 12:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.134 12:50:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:19.134 12:50:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.134 12:50:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.134 12:50:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:19.135 12:50:40 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.135 12:50:40 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.135 12:50:40 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.135 12:50:40 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.135 12:50:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.135 12:50:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.135 12:50:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:19.135 12:50:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:19.135 12:50:40 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:19.135 12:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:19.135 12:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:19.135 12:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:19.135 12:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:19.135 12:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:19.135 12:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:19.135 12:50:40 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:19.135 12:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:19.135 12:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:19.135 12:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.135 12:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:19.135 INFO: launching applications... 00:05:19.135 12:50:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:19.135 12:50:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:19.135 12:50:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:19.135 12:50:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.135 12:50:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.135 12:50:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.135 12:50:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.135 12:50:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.135 12:50:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=459673 00:05:19.135 12:50:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.135 Waiting for target to run... 00:05:19.135 12:50:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 459673 /var/tmp/spdk_tgt.sock 00:05:19.135 12:50:40 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 459673 ']' 00:05:19.135 12:50:40 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.135 12:50:40 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:19.135 12:50:40 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.135 12:50:40 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.135 12:50:40 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.135 12:50:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.396 [2024-07-15 12:50:41.002431] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:05:19.396 [2024-07-15 12:50:41.002506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459673 ] 00:05:19.396 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.658 [2024-07-15 12:50:41.318837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.658 [2024-07-15 12:50:41.375370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.230 12:50:41 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.230 12:50:41 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:20.230 12:50:41 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:20.230 00:05:20.230 12:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:20.230 INFO: shutting down applications... 00:05:20.230 12:50:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:20.230 12:50:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:20.230 12:50:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:20.230 12:50:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 459673 ]] 00:05:20.230 12:50:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 459673 00:05:20.230 12:50:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:20.230 12:50:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.230 12:50:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 459673 00:05:20.230 12:50:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.492 12:50:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.492 12:50:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.492 12:50:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 459673 00:05:20.492 12:50:42 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:20.492 12:50:42 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:20.492 12:50:42 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:20.492 12:50:42 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:20.492 SPDK target shutdown done 00:05:20.492 12:50:42 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:20.492 Success 00:05:20.492 00:05:20.492 real 0m1.442s 00:05:20.492 user 0m1.035s 00:05:20.492 sys 0m0.425s 00:05:20.492 12:50:42 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.492 12:50:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.492 ************************************ 00:05:20.492 END TEST json_config_extra_key 00:05:20.492 ************************************ 00:05:20.754 12:50:42 -- common/autotest_common.sh@1142 -- # return 0 00:05:20.754 12:50:42 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.754 12:50:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.754 12:50:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.754 12:50:42 -- 
common/autotest_common.sh@10 -- # set +x 00:05:20.754 ************************************ 00:05:20.754 START TEST alias_rpc 00:05:20.754 ************************************ 00:05:20.754 12:50:42 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:20.754 * Looking for test storage... 00:05:20.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:20.754 12:50:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.754 12:50:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=459954 00:05:20.754 12:50:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 459954 00:05:20.754 12:50:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.754 12:50:42 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 459954 ']' 00:05:20.754 12:50:42 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.754 12:50:42 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.754 12:50:42 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.754 12:50:42 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.754 12:50:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.754 [2024-07-15 12:50:42.492896] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:05:20.754 [2024-07-15 12:50:42.492952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid459954 ] 00:05:20.754 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.754 [2024-07-15 12:50:42.554420] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.014 [2024-07-15 12:50:42.621367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.588 12:50:43 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.588 12:50:43 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:21.588 12:50:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:21.847 12:50:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 459954 00:05:21.847 12:50:43 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 459954 ']' 00:05:21.848 12:50:43 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 459954 00:05:21.848 12:50:43 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:21.848 12:50:43 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.848 12:50:43 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 459954 00:05:21.848 12:50:43 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.848 12:50:43 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.848 12:50:43 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 459954' 00:05:21.848 killing process with pid 459954 00:05:21.848 12:50:43 alias_rpc -- common/autotest_common.sh@967 
-- # kill 459954 00:05:21.848 12:50:43 alias_rpc -- common/autotest_common.sh@972 -- # wait 459954 00:05:22.108 00:05:22.108 real 0m1.357s 00:05:22.108 user 0m1.479s 00:05:22.108 sys 0m0.362s 00:05:22.108 12:50:43 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.108 12:50:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.108 ************************************ 00:05:22.108 END TEST alias_rpc 00:05:22.108 ************************************ 00:05:22.108 12:50:43 -- common/autotest_common.sh@1142 -- # return 0 00:05:22.108 12:50:43 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:22.108 12:50:43 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:22.108 12:50:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.108 12:50:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.108 12:50:43 -- common/autotest_common.sh@10 -- # set +x 00:05:22.108 ************************************ 00:05:22.108 START TEST spdkcli_tcp 00:05:22.108 ************************************ 00:05:22.108 12:50:43 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:22.108 * Looking for test storage... 00:05:22.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:22.108 12:50:43 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:22.108 12:50:43 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:22.108 12:50:43 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:22.108 12:50:43 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:22.108 12:50:43 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:22.108 12:50:43 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:22.108 12:50:43 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:22.108 12:50:43 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:22.108 12:50:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.108 12:50:43 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=460231 00:05:22.108 12:50:43 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 460231 00:05:22.108 12:50:43 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:22.108 12:50:43 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 460231 ']' 00:05:22.108 12:50:43 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.108 12:50:43 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.108 12:50:43 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.108 12:50:43 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.108 12:50:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.368 [2024-07-15 12:50:43.951402] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:05:22.368 [2024-07-15 12:50:43.951456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460231 ] 00:05:22.368 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.368 [2024-07-15 12:50:44.018396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.368 [2024-07-15 12:50:44.085017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.368 [2024-07-15 12:50:44.085020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.939 12:50:44 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.939 12:50:44 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:22.939 12:50:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=460558 00:05:22.939 12:50:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:22.939 12:50:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:23.200 [ 00:05:23.200 "bdev_malloc_delete", 00:05:23.200 "bdev_malloc_create", 00:05:23.200 "bdev_null_resize", 00:05:23.200 "bdev_null_delete", 00:05:23.200 "bdev_null_create", 00:05:23.200 "bdev_nvme_cuse_unregister", 00:05:23.200 "bdev_nvme_cuse_register", 00:05:23.200 "bdev_opal_new_user", 00:05:23.200 "bdev_opal_set_lock_state", 00:05:23.200 "bdev_opal_delete", 00:05:23.200 "bdev_opal_get_info", 00:05:23.200 "bdev_opal_create", 00:05:23.200 "bdev_nvme_opal_revert", 00:05:23.200 "bdev_nvme_opal_init", 00:05:23.200 "bdev_nvme_send_cmd", 00:05:23.200 "bdev_nvme_get_path_iostat", 00:05:23.200 "bdev_nvme_get_mdns_discovery_info", 00:05:23.200 "bdev_nvme_stop_mdns_discovery", 00:05:23.200 "bdev_nvme_start_mdns_discovery", 00:05:23.200 "bdev_nvme_set_multipath_policy", 00:05:23.200 "bdev_nvme_set_preferred_path", 00:05:23.200 "bdev_nvme_get_io_paths", 00:05:23.200 "bdev_nvme_remove_error_injection", 00:05:23.200 "bdev_nvme_add_error_injection", 00:05:23.200 "bdev_nvme_get_discovery_info", 00:05:23.200 "bdev_nvme_stop_discovery", 00:05:23.200 "bdev_nvme_start_discovery", 00:05:23.200 "bdev_nvme_get_controller_health_info", 00:05:23.200 "bdev_nvme_disable_controller", 00:05:23.200 "bdev_nvme_enable_controller", 00:05:23.200 "bdev_nvme_reset_controller", 00:05:23.200 "bdev_nvme_get_transport_statistics", 00:05:23.200 "bdev_nvme_apply_firmware", 00:05:23.200 "bdev_nvme_detach_controller", 00:05:23.200 "bdev_nvme_get_controllers", 00:05:23.200 "bdev_nvme_attach_controller", 00:05:23.200 "bdev_nvme_set_hotplug", 00:05:23.200 "bdev_nvme_set_options", 00:05:23.200 "bdev_passthru_delete", 00:05:23.200 "bdev_passthru_create", 00:05:23.200 "bdev_lvol_set_parent_bdev", 00:05:23.200 "bdev_lvol_set_parent", 00:05:23.200 "bdev_lvol_check_shallow_copy", 00:05:23.200 "bdev_lvol_start_shallow_copy", 00:05:23.200 "bdev_lvol_grow_lvstore", 00:05:23.200 "bdev_lvol_get_lvols", 00:05:23.200 "bdev_lvol_get_lvstores", 00:05:23.200 "bdev_lvol_delete", 00:05:23.200 "bdev_lvol_set_read_only", 00:05:23.200 "bdev_lvol_resize", 00:05:23.200 "bdev_lvol_decouple_parent", 00:05:23.200 "bdev_lvol_inflate", 00:05:23.200 "bdev_lvol_rename", 00:05:23.200 "bdev_lvol_clone_bdev", 00:05:23.200 "bdev_lvol_clone", 00:05:23.200 "bdev_lvol_snapshot", 00:05:23.200 "bdev_lvol_create", 00:05:23.200 "bdev_lvol_delete_lvstore", 00:05:23.200 
"bdev_lvol_rename_lvstore", 00:05:23.200 "bdev_lvol_create_lvstore", 00:05:23.200 "bdev_raid_set_options", 00:05:23.200 "bdev_raid_remove_base_bdev", 00:05:23.200 "bdev_raid_add_base_bdev", 00:05:23.200 "bdev_raid_delete", 00:05:23.200 "bdev_raid_create", 00:05:23.200 "bdev_raid_get_bdevs", 00:05:23.200 "bdev_error_inject_error", 00:05:23.200 "bdev_error_delete", 00:05:23.200 "bdev_error_create", 00:05:23.200 "bdev_split_delete", 00:05:23.200 "bdev_split_create", 00:05:23.200 "bdev_delay_delete", 00:05:23.200 "bdev_delay_create", 00:05:23.200 "bdev_delay_update_latency", 00:05:23.200 "bdev_zone_block_delete", 00:05:23.200 "bdev_zone_block_create", 00:05:23.200 "blobfs_create", 00:05:23.200 "blobfs_detect", 00:05:23.200 "blobfs_set_cache_size", 00:05:23.200 "bdev_aio_delete", 00:05:23.200 "bdev_aio_rescan", 00:05:23.200 "bdev_aio_create", 00:05:23.200 "bdev_ftl_set_property", 00:05:23.200 "bdev_ftl_get_properties", 00:05:23.200 "bdev_ftl_get_stats", 00:05:23.200 "bdev_ftl_unmap", 00:05:23.200 "bdev_ftl_unload", 00:05:23.200 "bdev_ftl_delete", 00:05:23.200 "bdev_ftl_load", 00:05:23.200 "bdev_ftl_create", 00:05:23.200 "bdev_virtio_attach_controller", 00:05:23.200 "bdev_virtio_scsi_get_devices", 00:05:23.200 "bdev_virtio_detach_controller", 00:05:23.200 "bdev_virtio_blk_set_hotplug", 00:05:23.200 "bdev_iscsi_delete", 00:05:23.200 "bdev_iscsi_create", 00:05:23.200 "bdev_iscsi_set_options", 00:05:23.200 "accel_error_inject_error", 00:05:23.200 "ioat_scan_accel_module", 00:05:23.200 "dsa_scan_accel_module", 00:05:23.200 "iaa_scan_accel_module", 00:05:23.200 "vfu_virtio_create_scsi_endpoint", 00:05:23.200 "vfu_virtio_scsi_remove_target", 00:05:23.200 "vfu_virtio_scsi_add_target", 00:05:23.200 "vfu_virtio_create_blk_endpoint", 00:05:23.200 "vfu_virtio_delete_endpoint", 00:05:23.200 "keyring_file_remove_key", 00:05:23.200 "keyring_file_add_key", 00:05:23.200 "keyring_linux_set_options", 00:05:23.200 "iscsi_get_histogram", 00:05:23.200 "iscsi_enable_histogram", 00:05:23.200 "iscsi_set_options", 00:05:23.200 "iscsi_get_auth_groups", 00:05:23.200 "iscsi_auth_group_remove_secret", 00:05:23.200 "iscsi_auth_group_add_secret", 00:05:23.200 "iscsi_delete_auth_group", 00:05:23.200 "iscsi_create_auth_group", 00:05:23.200 "iscsi_set_discovery_auth", 00:05:23.200 "iscsi_get_options", 00:05:23.201 "iscsi_target_node_request_logout", 00:05:23.201 "iscsi_target_node_set_redirect", 00:05:23.201 "iscsi_target_node_set_auth", 00:05:23.201 "iscsi_target_node_add_lun", 00:05:23.201 "iscsi_get_stats", 00:05:23.201 "iscsi_get_connections", 00:05:23.201 "iscsi_portal_group_set_auth", 00:05:23.201 "iscsi_start_portal_group", 00:05:23.201 "iscsi_delete_portal_group", 00:05:23.201 "iscsi_create_portal_group", 00:05:23.201 "iscsi_get_portal_groups", 00:05:23.201 "iscsi_delete_target_node", 00:05:23.201 "iscsi_target_node_remove_pg_ig_maps", 00:05:23.201 "iscsi_target_node_add_pg_ig_maps", 00:05:23.201 "iscsi_create_target_node", 00:05:23.201 "iscsi_get_target_nodes", 00:05:23.201 "iscsi_delete_initiator_group", 00:05:23.201 "iscsi_initiator_group_remove_initiators", 00:05:23.201 "iscsi_initiator_group_add_initiators", 00:05:23.201 "iscsi_create_initiator_group", 00:05:23.201 "iscsi_get_initiator_groups", 00:05:23.201 "nvmf_set_crdt", 00:05:23.201 "nvmf_set_config", 00:05:23.201 "nvmf_set_max_subsystems", 00:05:23.201 "nvmf_stop_mdns_prr", 00:05:23.201 "nvmf_publish_mdns_prr", 00:05:23.201 "nvmf_subsystem_get_listeners", 00:05:23.201 "nvmf_subsystem_get_qpairs", 00:05:23.201 "nvmf_subsystem_get_controllers", 00:05:23.201 
"nvmf_get_stats", 00:05:23.201 "nvmf_get_transports", 00:05:23.201 "nvmf_create_transport", 00:05:23.201 "nvmf_get_targets", 00:05:23.201 "nvmf_delete_target", 00:05:23.201 "nvmf_create_target", 00:05:23.201 "nvmf_subsystem_allow_any_host", 00:05:23.201 "nvmf_subsystem_remove_host", 00:05:23.201 "nvmf_subsystem_add_host", 00:05:23.201 "nvmf_ns_remove_host", 00:05:23.201 "nvmf_ns_add_host", 00:05:23.201 "nvmf_subsystem_remove_ns", 00:05:23.201 "nvmf_subsystem_add_ns", 00:05:23.201 "nvmf_subsystem_listener_set_ana_state", 00:05:23.201 "nvmf_discovery_get_referrals", 00:05:23.201 "nvmf_discovery_remove_referral", 00:05:23.201 "nvmf_discovery_add_referral", 00:05:23.201 "nvmf_subsystem_remove_listener", 00:05:23.201 "nvmf_subsystem_add_listener", 00:05:23.201 "nvmf_delete_subsystem", 00:05:23.201 "nvmf_create_subsystem", 00:05:23.201 "nvmf_get_subsystems", 00:05:23.201 "env_dpdk_get_mem_stats", 00:05:23.201 "nbd_get_disks", 00:05:23.201 "nbd_stop_disk", 00:05:23.201 "nbd_start_disk", 00:05:23.201 "ublk_recover_disk", 00:05:23.201 "ublk_get_disks", 00:05:23.201 "ublk_stop_disk", 00:05:23.201 "ublk_start_disk", 00:05:23.201 "ublk_destroy_target", 00:05:23.201 "ublk_create_target", 00:05:23.201 "virtio_blk_create_transport", 00:05:23.201 "virtio_blk_get_transports", 00:05:23.201 "vhost_controller_set_coalescing", 00:05:23.201 "vhost_get_controllers", 00:05:23.201 "vhost_delete_controller", 00:05:23.201 "vhost_create_blk_controller", 00:05:23.201 "vhost_scsi_controller_remove_target", 00:05:23.201 "vhost_scsi_controller_add_target", 00:05:23.201 "vhost_start_scsi_controller", 00:05:23.201 "vhost_create_scsi_controller", 00:05:23.201 "thread_set_cpumask", 00:05:23.201 "framework_get_governor", 00:05:23.201 "framework_get_scheduler", 00:05:23.201 "framework_set_scheduler", 00:05:23.201 "framework_get_reactors", 00:05:23.201 "thread_get_io_channels", 00:05:23.201 "thread_get_pollers", 00:05:23.201 "thread_get_stats", 00:05:23.201 "framework_monitor_context_switch", 00:05:23.201 "spdk_kill_instance", 00:05:23.201 "log_enable_timestamps", 00:05:23.201 "log_get_flags", 00:05:23.201 "log_clear_flag", 00:05:23.201 "log_set_flag", 00:05:23.201 "log_get_level", 00:05:23.201 "log_set_level", 00:05:23.201 "log_get_print_level", 00:05:23.201 "log_set_print_level", 00:05:23.201 "framework_enable_cpumask_locks", 00:05:23.201 "framework_disable_cpumask_locks", 00:05:23.201 "framework_wait_init", 00:05:23.201 "framework_start_init", 00:05:23.201 "scsi_get_devices", 00:05:23.201 "bdev_get_histogram", 00:05:23.201 "bdev_enable_histogram", 00:05:23.201 "bdev_set_qos_limit", 00:05:23.201 "bdev_set_qd_sampling_period", 00:05:23.201 "bdev_get_bdevs", 00:05:23.201 "bdev_reset_iostat", 00:05:23.201 "bdev_get_iostat", 00:05:23.201 "bdev_examine", 00:05:23.201 "bdev_wait_for_examine", 00:05:23.201 "bdev_set_options", 00:05:23.201 "notify_get_notifications", 00:05:23.201 "notify_get_types", 00:05:23.201 "accel_get_stats", 00:05:23.201 "accel_set_options", 00:05:23.201 "accel_set_driver", 00:05:23.201 "accel_crypto_key_destroy", 00:05:23.201 "accel_crypto_keys_get", 00:05:23.201 "accel_crypto_key_create", 00:05:23.201 "accel_assign_opc", 00:05:23.201 "accel_get_module_info", 00:05:23.201 "accel_get_opc_assignments", 00:05:23.201 "vmd_rescan", 00:05:23.201 "vmd_remove_device", 00:05:23.201 "vmd_enable", 00:05:23.201 "sock_get_default_impl", 00:05:23.201 "sock_set_default_impl", 00:05:23.201 "sock_impl_set_options", 00:05:23.201 "sock_impl_get_options", 00:05:23.201 "iobuf_get_stats", 00:05:23.201 "iobuf_set_options", 
00:05:23.201 "keyring_get_keys", 00:05:23.201 "framework_get_pci_devices", 00:05:23.201 "framework_get_config", 00:05:23.201 "framework_get_subsystems", 00:05:23.201 "vfu_tgt_set_base_path", 00:05:23.201 "trace_get_info", 00:05:23.201 "trace_get_tpoint_group_mask", 00:05:23.201 "trace_disable_tpoint_group", 00:05:23.201 "trace_enable_tpoint_group", 00:05:23.201 "trace_clear_tpoint_mask", 00:05:23.201 "trace_set_tpoint_mask", 00:05:23.201 "spdk_get_version", 00:05:23.201 "rpc_get_methods" 00:05:23.201 ] 00:05:23.201 12:50:44 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:23.201 12:50:44 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.201 12:50:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.201 12:50:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:23.202 12:50:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 460231 00:05:23.202 12:50:44 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 460231 ']' 00:05:23.202 12:50:44 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 460231 00:05:23.202 12:50:44 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:23.202 12:50:44 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.202 12:50:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 460231 00:05:23.202 12:50:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.202 12:50:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.202 12:50:44 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 460231' 00:05:23.202 killing process with pid 460231 00:05:23.202 12:50:44 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 460231 00:05:23.202 12:50:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 460231 00:05:23.462 00:05:23.462 real 0m1.381s 00:05:23.462 user 0m2.557s 00:05:23.462 sys 0m0.390s 00:05:23.462 12:50:45 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.462 12:50:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.462 ************************************ 00:05:23.462 END TEST spdkcli_tcp 00:05:23.462 ************************************ 00:05:23.462 12:50:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:23.462 12:50:45 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.462 12:50:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.462 12:50:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.462 12:50:45 -- common/autotest_common.sh@10 -- # set +x 00:05:23.462 ************************************ 00:05:23.462 START TEST dpdk_mem_utility 00:05:23.462 ************************************ 00:05:23.462 12:50:45 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:23.722 * Looking for test storage... 
00:05:23.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:23.722 12:50:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:23.722 12:50:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=460631 00:05:23.722 12:50:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 460631 00:05:23.722 12:50:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.722 12:50:45 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 460631 ']' 00:05:23.722 12:50:45 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.722 12:50:45 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.722 12:50:45 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.722 12:50:45 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.722 12:50:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.722 [2024-07-15 12:50:45.399533] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:05:23.722 [2024-07-15 12:50:45.399593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid460631 ] 00:05:23.722 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.722 [2024-07-15 12:50:45.469958] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.722 [2024-07-15 12:50:45.543918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.663 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.663 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:24.663 12:50:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:24.663 12:50:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:24.663 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.663 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.663 { 00:05:24.663 "filename": "/tmp/spdk_mem_dump.txt" 00:05:24.663 } 00:05:24.663 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.663 12:50:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:24.663 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:24.663 1 heaps totaling size 814.000000 MiB 00:05:24.663 size: 814.000000 MiB heap id: 0 00:05:24.663 end heaps---------- 00:05:24.663 8 mempools totaling size 598.116089 MiB 00:05:24.663 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:24.663 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:24.663 size: 84.521057 MiB name: bdev_io_460631 00:05:24.663 size: 51.011292 MiB name: evtpool_460631 00:05:24.663 size: 
50.003479 MiB name: msgpool_460631 00:05:24.663 size: 21.763794 MiB name: PDU_Pool 00:05:24.663 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:24.663 size: 0.026123 MiB name: Session_Pool 00:05:24.663 end mempools------- 00:05:24.663 6 memzones totaling size 4.142822 MiB 00:05:24.663 size: 1.000366 MiB name: RG_ring_0_460631 00:05:24.663 size: 1.000366 MiB name: RG_ring_1_460631 00:05:24.663 size: 1.000366 MiB name: RG_ring_4_460631 00:05:24.663 size: 1.000366 MiB name: RG_ring_5_460631 00:05:24.663 size: 0.125366 MiB name: RG_ring_2_460631 00:05:24.663 size: 0.015991 MiB name: RG_ring_3_460631 00:05:24.663 end memzones------- 00:05:24.663 12:50:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:24.663 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:24.663 list of free elements. size: 12.519348 MiB 00:05:24.663 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:24.663 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:24.663 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:24.663 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:24.663 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:24.663 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:24.663 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:24.663 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:24.663 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:24.663 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:24.663 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:24.663 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:24.663 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:24.663 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:24.663 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:24.663 list of standard malloc elements. 
size: 199.218079 MiB 00:05:24.663 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:24.663 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:24.663 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:24.663 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:24.663 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:24.663 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:24.663 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:24.663 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:24.663 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:24.663 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:24.663 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:24.663 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:24.663 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:24.663 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:24.663 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:24.663 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:24.663 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:24.663 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:24.663 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:24.663 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:24.663 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:24.663 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:24.663 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:24.663 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:24.663 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:24.663 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:24.663 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:24.663 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:24.663 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:24.663 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:24.663 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:24.663 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:24.663 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:24.663 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:24.663 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:24.663 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:24.663 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:24.663 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:24.663 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:24.663 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:24.663 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:24.663 list of memzone associated elements. 
size: 602.262573 MiB 00:05:24.663 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:24.663 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:24.663 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:24.663 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:24.663 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:24.663 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_460631_0 00:05:24.663 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:24.663 associated memzone info: size: 48.002930 MiB name: MP_evtpool_460631_0 00:05:24.663 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:24.663 associated memzone info: size: 48.002930 MiB name: MP_msgpool_460631_0 00:05:24.663 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:24.663 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:24.663 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:24.663 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:24.663 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:24.663 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_460631 00:05:24.663 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:24.663 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_460631 00:05:24.663 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:24.663 associated memzone info: size: 1.007996 MiB name: MP_evtpool_460631 00:05:24.663 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:24.663 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:24.663 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:24.663 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:24.663 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:24.663 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:24.663 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:24.663 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:24.663 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:24.663 associated memzone info: size: 1.000366 MiB name: RG_ring_0_460631 00:05:24.663 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:24.663 associated memzone info: size: 1.000366 MiB name: RG_ring_1_460631 00:05:24.663 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:24.663 associated memzone info: size: 1.000366 MiB name: RG_ring_4_460631 00:05:24.663 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:24.663 associated memzone info: size: 1.000366 MiB name: RG_ring_5_460631 00:05:24.663 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:24.663 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_460631 00:05:24.663 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:24.663 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:24.663 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:24.663 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:24.663 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:24.663 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:24.663 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:24.663 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_460631 00:05:24.664 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:24.664 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:24.664 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:24.664 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:24.664 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:24.664 associated memzone info: size: 0.015991 MiB name: RG_ring_3_460631 00:05:24.664 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:24.664 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:24.664 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:24.664 associated memzone info: size: 0.000183 MiB name: MP_msgpool_460631 00:05:24.664 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:24.664 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_460631 00:05:24.664 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:24.664 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:24.664 12:50:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:24.664 12:50:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 460631 00:05:24.664 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 460631 ']' 00:05:24.664 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 460631 00:05:24.664 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:24.664 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.664 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 460631 00:05:24.664 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.664 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.664 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 460631' 00:05:24.664 killing process with pid 460631 00:05:24.664 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 460631 00:05:24.664 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 460631 00:05:24.924 00:05:24.924 real 0m1.284s 00:05:24.924 user 0m1.354s 00:05:24.924 sys 0m0.374s 00:05:24.924 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.924 12:50:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.924 ************************************ 00:05:24.924 END TEST dpdk_mem_utility 00:05:24.924 ************************************ 00:05:24.924 12:50:46 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.925 12:50:46 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:24.925 12:50:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.925 12:50:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.925 12:50:46 -- common/autotest_common.sh@10 -- # set +x 00:05:24.925 ************************************ 00:05:24.925 START TEST event 00:05:24.925 ************************************ 00:05:24.925 12:50:46 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:24.925 * Looking for test storage... 
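The dpdk_mem_utility trace above follows a short flow: ask the running target to dump its DPDK memory state, then post-process the dump file. A sketch of that flow, assuming the harness's rpc_cmd wrapper resolves to scripts/rpc.py on the default socket and that dpdk_mem_info.py picks up the /tmp/spdk_mem_dump.txt path echoed above (both assumptions about this run, not shown verbatim):

  scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py                 # summary: heaps, mempools, memzones
  scripts/dpdk_mem_info.py -m 0            # per-element detail for heap id 0, as listed above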
00:05:24.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:24.925 12:50:46 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:24.925 12:50:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:24.925 12:50:46 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:24.925 12:50:46 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:24.925 12:50:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.925 12:50:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.925 ************************************ 00:05:24.925 START TEST event_perf 00:05:24.925 ************************************ 00:05:24.925 12:50:46 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.185 Running I/O for 1 seconds...[2024-07-15 12:50:46.761820] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:05:25.185 [2024-07-15 12:50:46.761895] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461017 ] 00:05:25.185 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.185 [2024-07-15 12:50:46.833816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.185 [2024-07-15 12:50:46.905029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.185 [2024-07-15 12:50:46.905146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.185 [2024-07-15 12:50:46.905302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.185 Running I/O for 1 seconds...[2024-07-15 12:50:46.905302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.128 00:05:26.128 lcore 0: 178218 00:05:26.128 lcore 1: 178216 00:05:26.128 lcore 2: 178212 00:05:26.128 lcore 3: 178215 00:05:26.389 done. 00:05:26.389 00:05:26.389 real 0m1.217s 00:05:26.389 user 0m4.137s 00:05:26.389 sys 0m0.075s 00:05:26.389 12:50:47 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.389 12:50:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.389 ************************************ 00:05:26.389 END TEST event_perf 00:05:26.389 ************************************ 00:05:26.389 12:50:47 event -- common/autotest_common.sh@1142 -- # return 0 00:05:26.389 12:50:47 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:26.389 12:50:47 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:26.389 12:50:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.389 12:50:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.389 ************************************ 00:05:26.389 START TEST event_reactor 00:05:26.389 ************************************ 00:05:26.389 12:50:48 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:26.389 [2024-07-15 12:50:48.056425] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:05:26.389 [2024-07-15 12:50:48.056552] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461374 ] 00:05:26.389 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.389 [2024-07-15 12:50:48.132856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.389 [2024-07-15 12:50:48.198627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.777 test_start 00:05:27.777 oneshot 00:05:27.777 tick 100 00:05:27.777 tick 100 00:05:27.777 tick 250 00:05:27.777 tick 100 00:05:27.777 tick 100 00:05:27.777 tick 100 00:05:27.777 tick 250 00:05:27.777 tick 500 00:05:27.777 tick 100 00:05:27.777 tick 100 00:05:27.777 tick 250 00:05:27.777 tick 100 00:05:27.777 tick 100 00:05:27.777 test_end 00:05:27.777 00:05:27.777 real 0m1.219s 00:05:27.777 user 0m1.142s 00:05:27.777 sys 0m0.072s 00:05:27.777 12:50:49 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.777 12:50:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:27.777 ************************************ 00:05:27.777 END TEST event_reactor 00:05:27.777 ************************************ 00:05:27.777 12:50:49 event -- common/autotest_common.sh@1142 -- # return 0 00:05:27.777 12:50:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:27.777 12:50:49 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:27.777 12:50:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.777 12:50:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.777 ************************************ 00:05:27.777 START TEST event_reactor_perf 00:05:27.777 ************************************ 00:05:27.777 12:50:49 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:27.777 [2024-07-15 12:50:49.348838] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:05:27.777 [2024-07-15 12:50:49.348916] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461602 ] 00:05:27.777 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.777 [2024-07-15 12:50:49.418284] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.777 [2024-07-15 12:50:49.483350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.719 test_start 00:05:28.719 test_end 00:05:28.719 Performance: 368951 events per second 00:05:28.719 00:05:28.719 real 0m1.209s 00:05:28.719 user 0m1.130s 00:05:28.719 sys 0m0.075s 00:05:28.719 12:50:50 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.719 12:50:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.719 ************************************ 00:05:28.719 END TEST event_reactor_perf 00:05:28.719 ************************************ 00:05:28.981 12:50:50 event -- common/autotest_common.sh@1142 -- # return 0 00:05:28.981 12:50:50 event -- event/event.sh@49 -- # uname -s 00:05:28.981 12:50:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:28.981 12:50:50 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:28.981 12:50:50 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.981 12:50:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.981 12:50:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.981 ************************************ 00:05:28.981 START TEST event_scheduler 00:05:28.981 ************************************ 00:05:28.981 12:50:50 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:28.981 * Looking for test storage... 00:05:28.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:28.981 12:50:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:28.981 12:50:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=461827 00:05:28.981 12:50:50 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.981 12:50:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:28.981 12:50:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 461827 00:05:28.981 12:50:50 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 461827 ']' 00:05:28.981 12:50:50 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.981 12:50:50 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.981 12:50:50 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.981 12:50:50 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.981 12:50:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.981 [2024-07-15 12:50:50.767162] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:05:28.981 [2024-07-15 12:50:50.767228] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid461827 ] 00:05:28.981 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.243 [2024-07-15 12:50:50.828977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.243 [2024-07-15 12:50:50.895988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.243 [2024-07-15 12:50:50.896126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.243 [2024-07-15 12:50:50.896291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.243 [2024-07-15 12:50:50.896292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.814 12:50:51 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.814 12:50:51 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:29.814 12:50:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:29.814 12:50:51 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.814 12:50:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.814 [2024-07-15 12:50:51.558340] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:29.814 [2024-07-15 12:50:51.558353] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:29.814 [2024-07-15 12:50:51.558360] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:29.814 [2024-07-15 12:50:51.558364] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:29.814 [2024-07-15 12:50:51.558368] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:29.814 12:50:51 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.814 12:50:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:29.814 12:50:51 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.814 12:50:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:29.814 [2024-07-15 12:50:51.612709] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
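The scheduler test starts its application in the paused --wait-for-rpc state, switches the framework to the dynamic scheduler over RPC, and only then completes initialization. A minimal sketch of that sequence, assuming rpc_cmd is a thin wrapper over scripts/rpc.py talking to the default /var/tmp/spdk.sock (the wrapper itself is not shown in this log):

  test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scripts/rpc.py framework_set_scheduler dynamic   # select the dynamic scheduler before init completes
  scripts/rpc.py framework_start_init              # finish startup; the reactors then begin scheduling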
00:05:29.814 12:50:51 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.814 12:50:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:29.814 12:50:51 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.814 12:50:51 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.814 12:50:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.074 ************************************ 00:05:30.074 START TEST scheduler_create_thread 00:05:30.074 ************************************ 00:05:30.074 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:30.074 12:50:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:30.074 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.074 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.074 2 00:05:30.074 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.074 12:50:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:30.074 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.074 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.074 3 00:05:30.074 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.074 12:50:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:30.074 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.074 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.075 4 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.075 5 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.075 6 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.075 7 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.075 8 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.075 9 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.075 12:50:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.645 10 00:05:30.645 12:50:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.645 12:50:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:30.645 12:50:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.645 12:50:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.030 12:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.030 12:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:32.030 12:50:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:32.030 12:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.030 12:50:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.620 12:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.620 12:50:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:32.620 12:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.620 12:50:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.570 12:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.570 12:50:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:33.570 12:50:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:33.570 12:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.570 12:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.141 12:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.141 00:05:34.141 real 0m4.223s 00:05:34.141 user 0m0.027s 00:05:34.141 sys 0m0.004s 00:05:34.141 12:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.142 12:50:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.142 ************************************ 00:05:34.142 END TEST scheduler_create_thread 00:05:34.142 ************************************ 00:05:34.142 12:50:55 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:34.142 12:50:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:34.142 12:50:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 461827 00:05:34.142 12:50:55 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 461827 ']' 00:05:34.142 12:50:55 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 461827 00:05:34.142 12:50:55 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:34.142 12:50:55 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.142 12:50:55 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 461827 00:05:34.142 12:50:55 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:34.142 12:50:55 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:34.402 12:50:55 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 461827' 00:05:34.402 killing process with pid 461827 00:05:34.402 12:50:55 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 461827 00:05:34.402 12:50:55 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 461827 00:05:34.662 [2024-07-15 12:50:56.254080] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
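scheduler_create_thread above drives the application through its plugin RPCs: spawn threads with a cpumask and a target active percentage, retune one, then delete one. A condensed sketch of those calls; the scheduler_thread_* methods appear to come from the test's scheduler_plugin rather than the core RPC set, and rpc.py is again assumed to stand in for the rpc_cmd wrapper:

  rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0     # returned thread id 11 above
  rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50               # raise it to 50% active
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100       # returned thread id 12 above
  rpc.py --plugin scheduler_plugin scheduler_thread_delete 12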
00:05:34.662 00:05:34.662 real 0m5.807s 00:05:34.662 user 0m13.672s 00:05:34.662 sys 0m0.384s 00:05:34.662 12:50:56 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.662 12:50:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.662 ************************************ 00:05:34.662 END TEST event_scheduler 00:05:34.662 ************************************ 00:05:34.662 12:50:56 event -- common/autotest_common.sh@1142 -- # return 0 00:05:34.662 12:50:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:34.662 12:50:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:34.662 12:50:56 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.662 12:50:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.662 12:50:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.922 ************************************ 00:05:34.922 START TEST app_repeat 00:05:34.922 ************************************ 00:05:34.922 12:50:56 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=463176 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 463176' 00:05:34.922 Process app_repeat pid: 463176 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:34.922 spdk_app_start Round 0 00:05:34.922 12:50:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 463176 /var/tmp/spdk-nbd.sock 00:05:34.922 12:50:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 463176 ']' 00:05:34.922 12:50:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.922 12:50:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.922 12:50:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.922 12:50:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.922 12:50:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.922 [2024-07-15 12:50:56.539420] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
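Before the app_repeat rounds start scrolling by, it helps to see the shape of the harness being traced. The sketch below reconstructs the driver loop from the surrounding xtrace lines; the helper names (waitforlisten, killprocess) come from autotest_common.sh, the app_repeat flags and socket path are taken from the trace, and everything else is simplified.

    modprobe nbd
    # run the repeat app on cores 0-1 (-m 0x3) with its own RPC socket and a 4 s timer
    test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    for round in 0 1 2; do
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # create two malloc bdevs, exercise them over nbd (see Round 0 below),
        # then ask the app to shut this round down and start the next one
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done
    killprocess "$repeat_pid"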
00:05:34.922 [2024-07-15 12:50:56.539487] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463176 ] 00:05:34.922 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.922 [2024-07-15 12:50:56.610067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.922 [2024-07-15 12:50:56.681288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.922 [2024-07-15 12:50:56.681306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.863 12:50:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.863 12:50:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:35.863 12:50:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.863 Malloc0 00:05:35.863 12:50:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.863 Malloc1 00:05:35.863 12:50:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.863 12:50:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.863 12:50:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.863 12:50:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.864 12:50:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.864 12:50:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.864 12:50:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.864 12:50:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.864 12:50:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.864 12:50:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.864 12:50:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.864 12:50:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.864 12:50:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.864 12:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.864 12:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.864 12:50:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.124 /dev/nbd0 00:05:36.124 12:50:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.124 12:50:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.124 12:50:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:36.124 12:50:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:36.124 12:50:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.124 12:50:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.124 12:50:57 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:36.124 12:50:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:36.124 12:50:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.124 12:50:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.124 12:50:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.124 1+0 records in 00:05:36.124 1+0 records out 00:05:36.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239183 s, 17.1 MB/s 00:05:36.124 12:50:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.124 12:50:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:36.124 12:50:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.124 12:50:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.124 12:50:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:36.124 12:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.124 12:50:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.124 12:50:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.385 /dev/nbd1 00:05:36.385 12:50:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.385 12:50:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.385 1+0 records in 00:05:36.385 1+0 records out 00:05:36.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024341 s, 16.8 MB/s 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.385 12:50:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:36.385 12:50:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.385 12:50:58 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.385 12:50:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.385 12:50:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.385 12:50:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.645 { 00:05:36.645 "nbd_device": "/dev/nbd0", 00:05:36.645 "bdev_name": "Malloc0" 00:05:36.645 }, 00:05:36.645 { 00:05:36.645 "nbd_device": "/dev/nbd1", 00:05:36.645 "bdev_name": "Malloc1" 00:05:36.645 } 00:05:36.645 ]' 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.645 { 00:05:36.645 "nbd_device": "/dev/nbd0", 00:05:36.645 "bdev_name": "Malloc0" 00:05:36.645 }, 00:05:36.645 { 00:05:36.645 "nbd_device": "/dev/nbd1", 00:05:36.645 "bdev_name": "Malloc1" 00:05:36.645 } 00:05:36.645 ]' 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.645 /dev/nbd1' 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.645 /dev/nbd1' 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.645 12:50:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.646 256+0 records in 00:05:36.646 256+0 records out 00:05:36.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121217 s, 86.5 MB/s 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.646 256+0 records in 00:05:36.646 256+0 records out 00:05:36.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298631 s, 35.1 MB/s 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.646 256+0 records in 00:05:36.646 256+0 records out 00:05:36.646 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0369921 s, 28.3 MB/s 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.646 12:50:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.905 12:50:58 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.905 12:50:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.165 12:50:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.165 12:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.165 12:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.165 12:50:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.165 12:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.165 12:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.165 12:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.165 12:50:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.165 12:50:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.165 12:50:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.165 12:50:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.165 12:50:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.165 12:50:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.426 12:50:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.426 [2024-07-15 12:50:59.223460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.686 [2024-07-15 12:50:59.287289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.686 [2024-07-15 12:50:59.287292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.686 [2024-07-15 12:50:59.318657] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.686 [2024-07-15 12:50:59.318692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.988 12:51:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.988 12:51:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:40.988 spdk_app_start Round 1 00:05:40.988 12:51:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 463176 /var/tmp/spdk-nbd.sock 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 463176 ']' 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
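Round 0 above is the complete nbd data-verification pass: two malloc bdevs are created, exported as /dev/nbd0 and /dev/nbd1, a 1 MiB random pattern is written through the block devices, and the same pattern is read back for comparison. A condensed sketch of one round follows, using the RPC socket and dd/cmp parameters shown in the trace; the temporary file name is a placeholder.

    sock=/var/tmp/spdk-nbd.sock
    scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096          # -> Malloc0
    scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096          # -> Malloc1
    scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    scripts/rpc.py -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256           # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do                            # write pass
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do                            # verify pass
        cmp -b -n 1M nbdrandtest "$nbd"
    done
    rm nbdrandtest

    scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd1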
00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:40.988 12:51:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.988 Malloc0 00:05:40.988 12:51:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.988 Malloc1 00:05:40.988 12:51:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.988 /dev/nbd0 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:40.988 1+0 records in 00:05:40.988 1+0 records out 00:05:40.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211831 s, 19.3 MB/s 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:40.988 12:51:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.988 12:51:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.249 /dev/nbd1 00:05:41.249 12:51:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.249 12:51:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.249 12:51:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:41.249 12:51:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:41.249 12:51:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:41.249 12:51:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:41.249 12:51:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:41.249 12:51:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:41.249 12:51:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:41.249 12:51:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:41.250 12:51:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.250 1+0 records in 00:05:41.250 1+0 records out 00:05:41.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003284 s, 12.5 MB/s 00:05:41.250 12:51:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.250 12:51:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:41.250 12:51:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.250 12:51:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:41.250 12:51:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:41.250 12:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.250 12:51:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.250 12:51:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.250 12:51:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.250 12:51:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.510 12:51:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:41.510 { 00:05:41.510 "nbd_device": "/dev/nbd0", 00:05:41.510 "bdev_name": "Malloc0" 00:05:41.510 }, 00:05:41.510 { 00:05:41.510 "nbd_device": "/dev/nbd1", 00:05:41.510 "bdev_name": "Malloc1" 00:05:41.510 } 00:05:41.510 ]' 00:05:41.510 12:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.510 12:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.510 { 00:05:41.510 "nbd_device": "/dev/nbd0", 00:05:41.510 "bdev_name": "Malloc0" 00:05:41.510 }, 00:05:41.510 { 00:05:41.510 "nbd_device": "/dev/nbd1", 00:05:41.510 "bdev_name": "Malloc1" 00:05:41.510 } 00:05:41.510 ]' 00:05:41.510 12:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.510 /dev/nbd1' 00:05:41.510 12:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.510 /dev/nbd1' 00:05:41.510 12:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.510 12:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.510 12:51:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.510 12:51:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.510 12:51:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.511 256+0 records in 00:05:41.511 256+0 records out 00:05:41.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124864 s, 84.0 MB/s 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.511 256+0 records in 00:05:41.511 256+0 records out 00:05:41.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267108 s, 39.3 MB/s 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.511 256+0 records in 00:05:41.511 256+0 records out 00:05:41.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0325886 s, 32.2 MB/s 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.511 12:51:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.771 12:51:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.030 12:51:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.030 12:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.030 12:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.030 12:51:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.030 12:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.030 12:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.030 12:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.030 12:51:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.030 12:51:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.030 12:51:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.030 12:51:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.030 12:51:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.030 12:51:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.290 12:51:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.290 [2024-07-15 12:51:04.084207] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.550 [2024-07-15 12:51:04.148767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.550 [2024-07-15 12:51:04.148769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.550 [2024-07-15 12:51:04.180980] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.550 [2024-07-15 12:51:04.181015] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.850 12:51:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.850 12:51:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:45.850 spdk_app_start Round 2 00:05:45.850 12:51:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 463176 /var/tmp/spdk-nbd.sock 00:05:45.850 12:51:06 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 463176 ']' 00:05:45.850 12:51:06 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.850 12:51:06 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.850 12:51:06 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
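Most of the per-device chatter in these rounds comes from two polling helpers: waitfornbd waits for the kernel to publish the device in /proc/partitions and then proves it answers a single direct-I/O read, while waitfornbd_exit waits for the entry to disappear again after nbd_stop_disk. The sketch below is a simplified reconstruction; the 20-iteration budget, the grep and dd invocations, and the size check are visible in the trace, while the sleep interval and scratch path are assumptions.

    waitfornbd() {
        local name=$1 size i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1
        done
        # prove the device answers reads: pull one 4 KiB block with O_DIRECT
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]
    }

    waitfornbd_exit() {
        local name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break   # gone from the partition table
            sleep 0.1
        done
    }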
00:05:45.850 12:51:06 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.850 12:51:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.850 12:51:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.850 12:51:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:45.850 12:51:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.850 Malloc0 00:05:45.851 12:51:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.851 Malloc1 00:05:45.851 12:51:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.851 /dev/nbd0 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:45.851 1+0 records in 00:05:45.851 1+0 records out 00:05:45.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271064 s, 15.1 MB/s 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:45.851 12:51:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.851 12:51:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.112 /dev/nbd1 00:05:46.112 12:51:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.112 12:51:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.112 12:51:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:46.112 12:51:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:46.113 12:51:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.113 12:51:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.113 12:51:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:46.113 12:51:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:46.113 12:51:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.113 12:51:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.113 12:51:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.113 1+0 records in 00:05:46.113 1+0 records out 00:05:46.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282201 s, 14.5 MB/s 00:05:46.113 12:51:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.113 12:51:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:46.113 12:51:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.113 12:51:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:46.113 12:51:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:46.113 12:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.113 12:51:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.113 12:51:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.113 12:51:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.113 12:51:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.374 12:51:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:46.374 { 00:05:46.374 "nbd_device": "/dev/nbd0", 00:05:46.374 "bdev_name": "Malloc0" 00:05:46.374 }, 00:05:46.374 { 00:05:46.374 "nbd_device": "/dev/nbd1", 00:05:46.374 "bdev_name": "Malloc1" 00:05:46.374 } 00:05:46.374 ]' 00:05:46.374 12:51:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.374 { 00:05:46.374 "nbd_device": "/dev/nbd0", 00:05:46.375 "bdev_name": "Malloc0" 00:05:46.375 }, 00:05:46.375 { 00:05:46.375 "nbd_device": "/dev/nbd1", 00:05:46.375 "bdev_name": "Malloc1" 00:05:46.375 } 00:05:46.375 ]' 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.375 /dev/nbd1' 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.375 /dev/nbd1' 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.375 12:51:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.375 256+0 records in 00:05:46.375 256+0 records out 00:05:46.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124976 s, 83.9 MB/s 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.375 256+0 records in 00:05:46.375 256+0 records out 00:05:46.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184762 s, 56.8 MB/s 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.375 256+0 records in 00:05:46.375 256+0 records out 00:05:46.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173698 s, 60.4 MB/s 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.375 12:51:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.637 12:51:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.898 12:51:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.898 12:51:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.898 12:51:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.898 12:51:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.898 12:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.898 12:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.898 12:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.898 12:51:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.898 12:51:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.898 12:51:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.898 12:51:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.898 12:51:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.898 12:51:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.214 12:51:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.214 [2024-07-15 12:51:08.887528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.214 [2024-07-15 12:51:08.952274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.214 [2024-07-15 12:51:08.952302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.214 [2024-07-15 12:51:08.983651] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.214 [2024-07-15 12:51:08.983687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.565 12:51:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 463176 /var/tmp/spdk-nbd.sock 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 463176 ']' 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
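Each round also ends with the same bookkeeping check seen here: after nbd_stop_disk the harness asks the target which nbd devices are still exported and expects the count to be back to zero (it expects 2 right after the disks are started). A sketch of that check, using the same nbd_get_disks / jq / grep -c pipeline as the trace:

    nbd_get_count() {
        local sock=$1 json names
        json=$(scripts/rpc.py -s "$sock" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero when nothing matches, hence the trailing true
        echo "$names" | grep -c /dev/nbd || true
    }

    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    [[ $count -eq 0 ]] || echo "still $count nbd device(s) attached" >&2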
00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:50.565 12:51:11 event.app_repeat -- event/event.sh@39 -- # killprocess 463176 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 463176 ']' 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 463176 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 463176 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 463176' 00:05:50.565 killing process with pid 463176 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@967 -- # kill 463176 00:05:50.565 12:51:11 event.app_repeat -- common/autotest_common.sh@972 -- # wait 463176 00:05:50.565 spdk_app_start is called in Round 0. 00:05:50.565 Shutdown signal received, stop current app iteration 00:05:50.565 Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 reinitialization... 00:05:50.565 spdk_app_start is called in Round 1. 00:05:50.565 Shutdown signal received, stop current app iteration 00:05:50.565 Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 reinitialization... 00:05:50.565 spdk_app_start is called in Round 2. 00:05:50.565 Shutdown signal received, stop current app iteration 00:05:50.565 Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 reinitialization... 00:05:50.565 spdk_app_start is called in Round 3. 
00:05:50.565 Shutdown signal received, stop current app iteration 00:05:50.565 12:51:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:50.565 12:51:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:50.565 00:05:50.565 real 0m15.597s 00:05:50.565 user 0m33.652s 00:05:50.565 sys 0m2.140s 00:05:50.565 12:51:12 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.565 12:51:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.565 ************************************ 00:05:50.565 END TEST app_repeat 00:05:50.565 ************************************ 00:05:50.565 12:51:12 event -- common/autotest_common.sh@1142 -- # return 0 00:05:50.565 12:51:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:50.565 12:51:12 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:50.565 12:51:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.565 12:51:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.565 12:51:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.565 ************************************ 00:05:50.565 START TEST cpu_locks 00:05:50.565 ************************************ 00:05:50.565 12:51:12 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:50.565 * Looking for test storage... 00:05:50.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:50.565 12:51:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:50.565 12:51:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:50.565 12:51:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:50.565 12:51:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:50.565 12:51:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.565 12:51:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.565 12:51:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.565 ************************************ 00:05:50.565 START TEST default_locks 00:05:50.565 ************************************ 00:05:50.565 12:51:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:50.565 12:51:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=466983 00:05:50.565 12:51:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 466983 00:05:50.565 12:51:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.565 12:51:12 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 466983 ']' 00:05:50.565 12:51:12 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.565 12:51:12 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.565 12:51:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
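From here on the log is test/event/cpu_locks.sh, which works against two RPC sockets (/var/tmp/spdk.sock and /var/tmp/spdk2.sock) and keeps launching spdk_tgt with an explicit core mask, then sits in waitforlisten until the target answers on its socket. A rough stand-in for that launch-and-wait step; the polling loop and the spdk_get_version probe are assumptions about what the helper does, not a copy of it:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout used by this job
    rpc_addr=/var/tmp/spdk.sock

    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &      # -m 0x1: a single reactor pinned to core 0
    tgt_pid=$!

    for _ in $(seq 1 100); do                    # cap the retries, like max_retries=100 above
        "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done
    echo "spdk_tgt ($tgt_pid) is up on $rpc_addr"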
00:05:50.565 12:51:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.565 12:51:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.565 [2024-07-15 12:51:12.367964] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:05:50.565 [2024-07-15 12:51:12.368027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466983 ] 00:05:50.825 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.825 [2024-07-15 12:51:12.438568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.825 [2024-07-15 12:51:12.512614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.396 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.396 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:51.397 12:51:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 466983 00:05:51.397 12:51:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 466983 00:05:51.397 12:51:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.656 lslocks: write error 00:05:51.656 12:51:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 466983 00:05:51.656 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 466983 ']' 00:05:51.656 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 466983 00:05:51.656 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:51.656 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.656 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 466983 00:05:51.656 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.656 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.656 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 466983' 00:05:51.656 killing process with pid 466983 00:05:51.656 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 466983 00:05:51.656 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 466983 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 466983 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 466983 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 466983 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 466983 ']' 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (466983) - No such process 00:05:51.917 ERROR: process (pid: 466983) is no longer running 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:51.917 00:05:51.917 real 0m1.224s 00:05:51.917 user 0m1.299s 00:05:51.917 sys 0m0.389s 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.917 12:51:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.917 ************************************ 00:05:51.917 END TEST default_locks 00:05:51.917 ************************************ 00:05:51.917 12:51:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:51.917 12:51:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:51.917 12:51:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.917 12:51:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.917 12:51:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.917 ************************************ 00:05:51.917 START TEST default_locks_via_rpc 00:05:51.917 ************************************ 00:05:51.917 12:51:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:51.917 12:51:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=467343 00:05:51.917 12:51:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 467343 00:05:51.917 12:51:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.917 12:51:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 467343 ']' 00:05:51.917 12:51:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.917 12:51:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.917 12:51:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.917 12:51:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.917 12:51:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.917 [2024-07-15 12:51:13.673916] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:05:51.917 [2024-07-15 12:51:13.673966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467343 ] 00:05:51.917 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.917 [2024-07-15 12:51:13.741498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.176 [2024-07-15 12:51:13.805974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 467343 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 467343 00:05:52.747 12:51:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.317 12:51:14 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 467343 00:05:53.317 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 467343 ']' 00:05:53.317 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 467343 00:05:53.317 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:53.317 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.317 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 467343 00:05:53.317 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.317 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.317 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 467343' 00:05:53.317 killing process with pid 467343 00:05:53.317 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 467343 00:05:53.317 12:51:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 467343 00:05:53.317 00:05:53.317 real 0m1.520s 00:05:53.317 user 0m1.639s 00:05:53.317 sys 0m0.477s 00:05:53.317 12:51:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.317 12:51:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.317 ************************************ 00:05:53.317 END TEST default_locks_via_rpc 00:05:53.317 ************************************ 00:05:53.577 12:51:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:53.577 12:51:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:53.577 12:51:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.577 12:51:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.577 12:51:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.577 ************************************ 00:05:53.577 START TEST non_locking_app_on_locked_coremask 00:05:53.577 ************************************ 00:05:53.577 12:51:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:53.577 12:51:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=467710 00:05:53.577 12:51:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 467710 /var/tmp/spdk.sock 00:05:53.577 12:51:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.577 12:51:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 467710 ']' 00:05:53.577 12:51:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.577 12:51:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.577 12:51:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.577 12:51:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.577 12:51:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.578 [2024-07-15 12:51:15.260588] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:05:53.578 [2024-07-15 12:51:15.260649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467710 ] 00:05:53.578 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.578 [2024-07-15 12:51:15.329630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.578 [2024-07-15 12:51:15.403478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.517 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.517 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:54.517 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:54.517 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=467739 00:05:54.517 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 467739 /var/tmp/spdk2.sock 00:05:54.517 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 467739 ']' 00:05:54.517 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.517 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.517 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.517 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.517 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.517 [2024-07-15 12:51:16.056162] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:05:54.517 [2024-07-15 12:51:16.056214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467739 ] 00:05:54.517 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.517 [2024-07-15 12:51:16.154632] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
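The default_locks and default_locks_via_rpc runs above both come down to the per-core lock files the target takes under /var/tmp (seen later as /var/tmp/spdk_cpu_lock_*): locks_exist pipes lslocks -p <pid> into grep -q spdk_cpu_lock, and the stray 'lslocks: write error' lines are only lslocks complaining that grep -q closed the pipe after the first match. The via_rpc variant additionally drops and re-takes the locks at runtime. Condensed, assuming a target with pid $tgt_pid is already up on /var/tmp/spdk.sock:

    rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'

    # locks_exist: does this pid hold an spdk_cpu_lock_* file lock?
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core locks held by $tgt_pid"

    # default_locks_via_rpc: release and re-acquire the same locks over RPC
    $rpc framework_disable_cpumask_locks            # no_locks above expects the glob to come back empty now
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no lock files left"
    $rpc framework_enable_cpumask_locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core locks re-acquired"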
00:05:54.517 [2024-07-15 12:51:16.154662] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.517 [2024-07-15 12:51:16.288279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.088 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.088 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:55.088 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 467710 00:05:55.088 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 467710 00:05:55.088 12:51:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.656 lslocks: write error 00:05:55.656 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 467710 00:05:55.656 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 467710 ']' 00:05:55.656 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 467710 00:05:55.656 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:55.656 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.657 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 467710 00:05:55.657 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.657 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.657 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 467710' 00:05:55.657 killing process with pid 467710 00:05:55.657 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 467710 00:05:55.657 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 467710 00:05:56.227 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 467739 00:05:56.227 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 467739 ']' 00:05:56.227 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 467739 00:05:56.227 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:56.227 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.227 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 467739 00:05:56.227 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.227 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.227 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 467739' 00:05:56.227 killing 
process with pid 467739 00:05:56.227 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 467739 00:05:56.227 12:51:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 467739 00:05:56.489 00:05:56.489 real 0m2.932s 00:05:56.489 user 0m3.190s 00:05:56.489 sys 0m0.878s 00:05:56.489 12:51:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.489 12:51:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.489 ************************************ 00:05:56.489 END TEST non_locking_app_on_locked_coremask 00:05:56.489 ************************************ 00:05:56.489 12:51:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:56.489 12:51:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:56.489 12:51:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.489 12:51:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.489 12:51:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.489 ************************************ 00:05:56.489 START TEST locking_app_on_unlocked_coremask 00:05:56.489 ************************************ 00:05:56.489 12:51:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:56.489 12:51:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=468339 00:05:56.489 12:51:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 468339 /var/tmp/spdk.sock 00:05:56.489 12:51:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:56.489 12:51:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 468339 ']' 00:05:56.489 12:51:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.489 12:51:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.489 12:51:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.489 12:51:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.489 12:51:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.489 [2024-07-15 12:51:18.273522] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:05:56.489 [2024-07-15 12:51:18.273576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468339 ] 00:05:56.489 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.749 [2024-07-15 12:51:18.342714] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
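non_locking_app_on_locked_coremask, which just finished, shows that a second spdk_tgt may start on an already-locked core as long as it is given --disable-cpumask-locks and its own RPC socket; the lock stays with the first instance. A sketch of that pairing, with the binary path taken from the log and the sleeps standing in for the waitforlisten calls:

    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    "$SPDK_TGT" -m 0x1 &                                                 # first instance locks core 0
    pid1=$!
    sleep 1                                                              # stand-in for waitforlisten

    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, no lock attempted
    pid2=$!
    sleep 1

    lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "lock still held by the first instance ($pid1)"
    lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "second instance ($pid2) holds no core lock"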
00:05:56.749 [2024-07-15 12:51:18.342750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.749 [2024-07-15 12:51:18.411488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.321 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.321 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:57.321 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=468434 00:05:57.321 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 468434 /var/tmp/spdk2.sock 00:05:57.321 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 468434 ']' 00:05:57.321 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.321 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.321 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.321 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.321 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:57.321 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.582 [2024-07-15 12:51:19.151518] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
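The killprocess calls scattered through these tests do a guarded kill rather than a bare one: the xtrace shows a kill -0 liveness check, a Linux check, a ps lookup of the command name (reactor_0 for spdk_tgt), a test against sudo, then the kill and a wait. The sketch below loosely mirrors those visible steps; the branch the helper takes when the name really is sudo is not exercised in this log and is simply skipped here:

    guarded_kill() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0           # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0 for an spdk_tgt instance
            [ "$name" = sudo ] && return 0               # real helper handles sudo-wrapped targets; skipped here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                  # reap it when it is a child of this shell
    }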
00:05:57.582 [2024-07-15 12:51:19.151571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468434 ] 00:05:57.582 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.582 [2024-07-15 12:51:19.256525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.582 [2024-07-15 12:51:19.387297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.156 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.156 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:58.156 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 468434 00:05:58.156 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 468434 00:05:58.156 12:51:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.726 lslocks: write error 00:05:58.726 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 468339 00:05:58.726 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 468339 ']' 00:05:58.726 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 468339 00:05:58.726 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:58.726 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.726 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 468339 00:05:58.727 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.727 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.727 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 468339' 00:05:58.727 killing process with pid 468339 00:05:58.727 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 468339 00:05:58.727 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 468339 00:05:59.303 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 468434 00:05:59.303 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 468434 ']' 00:05:59.303 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 468434 00:05:59.303 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:59.303 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.303 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 468434 00:05:59.303 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:59.303 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.303 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 468434' 00:05:59.303 killing process with pid 468434 00:05:59.303 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 468434 00:05:59.303 12:51:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 468434 00:05:59.565 00:05:59.565 real 0m2.953s 00:05:59.565 user 0m3.274s 00:05:59.565 sys 0m0.868s 00:05:59.565 12:51:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.565 12:51:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.565 ************************************ 00:05:59.565 END TEST locking_app_on_unlocked_coremask 00:05:59.565 ************************************ 00:05:59.565 12:51:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.565 12:51:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:59.565 12:51:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.565 12:51:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.565 12:51:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.565 ************************************ 00:05:59.565 START TEST locking_app_on_locked_coremask 00:05:59.565 ************************************ 00:05:59.565 12:51:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:59.565 12:51:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=468874 00:05:59.565 12:51:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 468874 /var/tmp/spdk.sock 00:05:59.565 12:51:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.565 12:51:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 468874 ']' 00:05:59.565 12:51:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.565 12:51:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.565 12:51:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.565 12:51:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.565 12:51:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.566 [2024-07-15 12:51:21.293638] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:05:59.566 [2024-07-15 12:51:21.293691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468874 ] 00:05:59.566 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.566 [2024-07-15 12:51:21.360696] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.825 [2024-07-15 12:51:21.430266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=469144 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 469144 /var/tmp/spdk2.sock 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 469144 /var/tmp/spdk2.sock 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 469144 /var/tmp/spdk2.sock 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 469144 ']' 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.397 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.397 [2024-07-15 12:51:22.085378] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:06:00.397 [2024-07-15 12:51:22.085429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469144 ] 00:06:00.397 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.397 [2024-07-15 12:51:22.183655] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 468874 has claimed it. 00:06:00.397 [2024-07-15 12:51:22.183695] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (469144) - No such process 00:06:00.969 ERROR: process (pid: 469144) is no longer running 00:06:00.969 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.969 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:00.969 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:00.969 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.969 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.969 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.969 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 468874 00:06:00.969 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 468874 00:06:00.969 12:51:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.542 lslocks: write error 00:06:01.542 12:51:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 468874 00:06:01.542 12:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 468874 ']' 00:06:01.542 12:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 468874 00:06:01.542 12:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:01.542 12:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.542 12:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 468874 00:06:01.542 12:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.542 12:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.542 12:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 468874' 00:06:01.542 killing process with pid 468874 00:06:01.542 12:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 468874 00:06:01.542 12:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 468874 00:06:01.803 00:06:01.803 real 0m2.193s 00:06:01.803 user 0m2.383s 00:06:01.803 sys 0m0.627s 00:06:01.803 12:51:23 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.803 12:51:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.803 ************************************ 00:06:01.803 END TEST locking_app_on_locked_coremask 00:06:01.803 ************************************ 00:06:01.803 12:51:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:01.803 12:51:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:01.803 12:51:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.803 12:51:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.803 12:51:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.803 ************************************ 00:06:01.803 START TEST locking_overlapped_coremask 00:06:01.803 ************************************ 00:06:01.803 12:51:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:01.803 12:51:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=469504 00:06:01.803 12:51:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 469504 /var/tmp/spdk.sock 00:06:01.803 12:51:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:01.803 12:51:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 469504 ']' 00:06:01.803 12:51:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.803 12:51:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.803 12:51:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.803 12:51:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.803 12:51:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.803 [2024-07-15 12:51:23.568500] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
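locking_app_on_locked_coremask, above, is the failure path: a second spdk_tgt that asks for core 0 without --disable-cpumask-locks hits 'Cannot create lock on core 0, probably process 468874 has claimed it' and exits, and the test wraps the launch in NOT so that a non-zero exit is the passing result. The same expectation written directly, running the second instance in the foreground instead of waiting on it:

    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    "$SPDK_TGT" -m 0x1 &                   # first instance claims core 0
    pid1=$!
    sleep 1                                # stand-in for waitforlisten

    if "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock; then   # should refuse the core and exit non-zero
        echo "unexpected: second instance started on a locked core" >&2
    else
        echo "second instance refused, as the test expects"
    fi
    lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "lock still belongs to $pid1"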
00:06:01.804 [2024-07-15 12:51:23.568559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469504 ] 00:06:01.804 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.063 [2024-07-15 12:51:23.638053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.063 [2024-07-15 12:51:23.711805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.063 [2024-07-15 12:51:23.711925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.063 [2024-07-15 12:51:23.711928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=469521 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 469521 /var/tmp/spdk2.sock 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 469521 /var/tmp/spdk2.sock 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 469521 /var/tmp/spdk2.sock 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 469521 ']' 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.634 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.634 [2024-07-15 12:51:24.395909] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:06:02.634 [2024-07-15 12:51:24.395962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469521 ] 00:06:02.634 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.896 [2024-07-15 12:51:24.477358] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 469504 has claimed it. 00:06:02.896 [2024-07-15 12:51:24.477390] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:03.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (469521) - No such process 00:06:03.469 ERROR: process (pid: 469521) is no longer running 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 469504 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 469504 ']' 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 469504 00:06:03.469 12:51:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:03.469 12:51:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.469 12:51:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 469504 00:06:03.469 12:51:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.469 12:51:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.469 12:51:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 469504' 00:06:03.469 killing process with pid 469504 00:06:03.469 12:51:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 469504 00:06:03.469 12:51:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 469504 00:06:03.469 00:06:03.469 real 0m1.764s 00:06:03.469 user 0m4.941s 00:06:03.469 sys 0m0.393s 00:06:03.469 12:51:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.469 12:51:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.469 ************************************ 00:06:03.469 END TEST locking_overlapped_coremask 00:06:03.469 ************************************ 00:06:03.730 12:51:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:03.730 12:51:25 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:03.730 12:51:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.730 12:51:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.730 12:51:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.730 ************************************ 00:06:03.730 START TEST locking_overlapped_coremask_via_rpc 00:06:03.730 ************************************ 00:06:03.730 12:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:03.730 12:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=469879 00:06:03.730 12:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 469879 /var/tmp/spdk.sock 00:06:03.730 12:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:03.730 12:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 469879 ']' 00:06:03.730 12:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.730 12:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.730 12:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.730 12:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.730 12:51:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.730 [2024-07-15 12:51:25.393999] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:03.730 [2024-07-15 12:51:25.394049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469879 ] 00:06:03.730 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.730 [2024-07-15 12:51:25.461720] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
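locking_overlapped_coremask, above, gives the first target -m 0x7 (cores 0-2) and then shows a second target asking for -m 0x1c (cores 2-4) being refused because the masks overlap on core 2. Afterwards check_remaining_locks verifies that exactly the three lock files for mask 0x7 are present; that check is plain globbing and brace expansion, as the xtrace shows:

    # check_remaining_locks, per the xtrace above
    locks=(/var/tmp/spdk_cpu_lock_*)                      # lock files actually on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # cores 0, 1 and 2 of mask 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "only cores 0-2 are locked"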
00:06:03.730 [2024-07-15 12:51:25.461750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.730 [2024-07-15 12:51:25.533937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.730 [2024-07-15 12:51:25.534052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.730 [2024-07-15 12:51:25.534055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.675 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.675 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:04.675 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=469903 00:06:04.675 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 469903 /var/tmp/spdk2.sock 00:06:04.675 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 469903 ']' 00:06:04.675 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:04.675 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.675 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.675 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.675 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.675 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.675 [2024-07-15 12:51:26.225215] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:04.675 [2024-07-15 12:51:26.225276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469903 ] 00:06:04.675 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.675 [2024-07-15 12:51:26.304557] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:04.675 [2024-07-15 12:51:26.304580] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.675 [2024-07-15 12:51:26.410267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.675 [2024-07-15 12:51:26.413352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.675 [2024-07-15 12:51:26.413354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:05.249 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.250 12:51:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.250 [2024-07-15 12:51:27.001297] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 469879 has claimed it. 
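The claim_cpu_cores error above is plain mask arithmetic: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the two masks intersect on core 2. A one-liner (not part of the test) makes the overlap explicit:

# 0x7 & 0x1c isolates the shared bit; bit 2 set means core 2 is contested.
printf 'overlapping core mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4 -> core 2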
00:06:05.250 request: 00:06:05.250 { 00:06:05.250 "method": "framework_enable_cpumask_locks", 00:06:05.250 "req_id": 1 00:06:05.250 } 00:06:05.250 Got JSON-RPC error response 00:06:05.250 response: 00:06:05.250 { 00:06:05.250 "code": -32603, 00:06:05.250 "message": "Failed to claim CPU core: 2" 00:06:05.250 } 00:06:05.250 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:05.250 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:05.250 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.250 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.250 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.250 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 469879 /var/tmp/spdk.sock 00:06:05.250 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 469879 ']' 00:06:05.250 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.250 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.250 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.250 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.250 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.511 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.511 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:05.511 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 469903 /var/tmp/spdk2.sock 00:06:05.511 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 469903 ']' 00:06:05.511 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.511 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.511 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
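The JSON-RPC exchange above is the negative half of the via_rpc test: framework_enable_cpumask_locks succeeds on the first target but must fail on the second because core 2 is already locked. Assuming the usual scripts/rpc.py wrapper behind rpc_cmd, the failing call can be reproduced roughly like this:

# Hedged reproduction of the NOT rpc_cmd step above; expected to return the
# -32603 "Failed to claim CPU core: 2" error while the first target is alive.
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "framework_enable_cpumask_locks failed as expected on the overlapping mask"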
00:06:05.511 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.511 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.772 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.772 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:05.772 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:05.772 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.772 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.772 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.772 00:06:05.772 real 0m2.004s 00:06:05.772 user 0m0.768s 00:06:05.772 sys 0m0.160s 00:06:05.772 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.772 12:51:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.772 ************************************ 00:06:05.772 END TEST locking_overlapped_coremask_via_rpc 00:06:05.772 ************************************ 00:06:05.772 12:51:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:05.772 12:51:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:05.772 12:51:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 469879 ]] 00:06:05.772 12:51:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 469879 00:06:05.772 12:51:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 469879 ']' 00:06:05.772 12:51:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 469879 00:06:05.772 12:51:27 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:05.772 12:51:27 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.772 12:51:27 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 469879 00:06:05.772 12:51:27 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.772 12:51:27 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.772 12:51:27 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 469879' 00:06:05.772 killing process with pid 469879 00:06:05.772 12:51:27 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 469879 00:06:05.772 12:51:27 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 469879 00:06:06.033 12:51:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 469903 ]] 00:06:06.033 12:51:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 469903 00:06:06.033 12:51:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 469903 ']' 00:06:06.033 12:51:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 469903 00:06:06.033 12:51:27 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:06:06.033 12:51:27 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.033 12:51:27 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 469903 00:06:06.033 12:51:27 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:06.033 12:51:27 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:06.033 12:51:27 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 469903' 00:06:06.033 killing process with pid 469903 00:06:06.033 12:51:27 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 469903 00:06:06.033 12:51:27 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 469903 00:06:06.294 12:51:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.294 12:51:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:06.294 12:51:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 469879 ]] 00:06:06.294 12:51:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 469879 00:06:06.294 12:51:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 469879 ']' 00:06:06.294 12:51:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 469879 00:06:06.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (469879) - No such process 00:06:06.294 12:51:27 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 469879 is not found' 00:06:06.294 Process with pid 469879 is not found 00:06:06.294 12:51:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 469903 ]] 00:06:06.294 12:51:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 469903 00:06:06.294 12:51:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 469903 ']' 00:06:06.294 12:51:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 469903 00:06:06.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (469903) - No such process 00:06:06.294 12:51:27 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 469903 is not found' 00:06:06.294 Process with pid 469903 is not found 00:06:06.294 12:51:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.294 00:06:06.294 real 0m15.731s 00:06:06.294 user 0m27.023s 00:06:06.294 sys 0m4.663s 00:06:06.294 12:51:27 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.294 12:51:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.294 ************************************ 00:06:06.294 END TEST cpu_locks 00:06:06.294 ************************************ 00:06:06.294 12:51:27 event -- common/autotest_common.sh@1142 -- # return 0 00:06:06.294 00:06:06.294 real 0m41.335s 00:06:06.294 user 1m20.983s 00:06:06.294 sys 0m7.766s 00:06:06.294 12:51:27 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.295 12:51:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.295 ************************************ 00:06:06.295 END TEST event 00:06:06.295 ************************************ 00:06:06.295 12:51:27 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.295 12:51:27 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:06.295 12:51:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.295 12:51:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.295 12:51:27 -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.295 ************************************ 00:06:06.295 START TEST thread 00:06:06.295 ************************************ 00:06:06.295 12:51:28 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:06.295 * Looking for test storage... 00:06:06.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:06.295 12:51:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.295 12:51:28 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:06.295 12:51:28 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.295 12:51:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.555 ************************************ 00:06:06.555 START TEST thread_poller_perf 00:06:06.555 ************************************ 00:06:06.555 12:51:28 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.555 [2024-07-15 12:51:28.176605] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:06.556 [2024-07-15 12:51:28.176719] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470437 ] 00:06:06.556 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.556 [2024-07-15 12:51:28.252855] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.556 [2024-07-15 12:51:28.328564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.556 Running 1000 pollers for 1 seconds with 1 microseconds period. 
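For reference, the poller_perf run traced here can be launched directly; judging by the banner it prints in both runs of this log, -b is the number of pollers, -l the poller period in microseconds and -t the run time in seconds (an inference from the log, not from the tool's help):

# Direct form of the first run above: 1000 pollers, 1 us period, 1 second.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf \
    -b 1000 -l 1 -t 1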
00:06:07.942 ====================================== 00:06:07.942 busy:2409328312 (cyc) 00:06:07.942 total_run_count: 287000 00:06:07.942 tsc_hz: 2400000000 (cyc) 00:06:07.942 ====================================== 00:06:07.942 poller_cost: 8394 (cyc), 3497 (nsec) 00:06:07.942 00:06:07.942 real 0m1.236s 00:06:07.942 user 0m1.142s 00:06:07.942 sys 0m0.089s 00:06:07.942 12:51:29 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.942 12:51:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.942 ************************************ 00:06:07.942 END TEST thread_poller_perf 00:06:07.942 ************************************ 00:06:07.942 12:51:29 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:07.942 12:51:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:07.942 12:51:29 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:07.942 12:51:29 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.942 12:51:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.942 ************************************ 00:06:07.942 START TEST thread_poller_perf 00:06:07.942 ************************************ 00:06:07.942 12:51:29 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:07.942 [2024-07-15 12:51:29.481373] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:07.942 [2024-07-15 12:51:29.481474] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470687 ] 00:06:07.942 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.942 [2024-07-15 12:51:29.552242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.942 [2024-07-15 12:51:29.617271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.942 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:08.886 ====================================== 00:06:08.886 busy:2402005906 (cyc) 00:06:08.886 total_run_count: 3756000 00:06:08.886 tsc_hz: 2400000000 (cyc) 00:06:08.886 ====================================== 00:06:08.886 poller_cost: 639 (cyc), 266 (nsec) 00:06:08.886 00:06:08.886 real 0m1.213s 00:06:08.886 user 0m1.132s 00:06:08.886 sys 0m0.077s 00:06:08.886 12:51:30 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.886 12:51:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.886 ************************************ 00:06:08.886 END TEST thread_poller_perf 00:06:08.886 ************************************ 00:06:08.886 12:51:30 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:08.886 12:51:30 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:08.886 00:06:08.886 real 0m2.694s 00:06:08.886 user 0m2.370s 00:06:08.886 sys 0m0.330s 00:06:08.886 12:51:30 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.886 12:51:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.886 ************************************ 00:06:08.886 END TEST thread 00:06:08.886 ************************************ 00:06:09.147 12:51:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:09.147 12:51:30 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:09.147 12:51:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.147 12:51:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.148 12:51:30 -- common/autotest_common.sh@10 -- # set +x 00:06:09.148 ************************************ 00:06:09.148 START TEST accel 00:06:09.148 ************************************ 00:06:09.148 12:51:30 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:09.148 * Looking for test storage... 00:06:09.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:09.148 12:51:30 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:09.148 12:51:30 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:09.148 12:51:30 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:09.148 12:51:30 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=471074 00:06:09.148 12:51:30 accel -- accel/accel.sh@63 -- # waitforlisten 471074 00:06:09.148 12:51:30 accel -- common/autotest_common.sh@829 -- # '[' -z 471074 ']' 00:06:09.148 12:51:30 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.148 12:51:30 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.148 12:51:30 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:09.148 12:51:30 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
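The two poller_cost figures above are consistent with busy cycles divided by total_run_count, converted to nanoseconds at the reported 2.4 GHz TSC; a quick sanity check (not part of the harness):

echo $(( 2409328312 / 287000 ))    # ~8394 cyc per poll for the 1 us period run
echo $(( 2402005906 / 3756000 ))   # ~639  cyc per poll for the 0 us period run
# at tsc_hz = 2400000000, one cycle is ~0.417 ns, so 8394 cyc ~ 3497 ns and 639 cyc ~ 266 ns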
00:06:09.148 12:51:30 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:09.148 12:51:30 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.148 12:51:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.148 12:51:30 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.148 12:51:30 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.148 12:51:30 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.148 12:51:30 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.148 12:51:30 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.148 12:51:30 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:09.148 12:51:30 accel -- accel/accel.sh@41 -- # jq -r . 00:06:09.148 [2024-07-15 12:51:30.939582] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:09.148 [2024-07-15 12:51:30.939653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471074 ] 00:06:09.148 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.409 [2024-07-15 12:51:31.012955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.409 [2024-07-15 12:51:31.085976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.982 12:51:31 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.982 12:51:31 accel -- common/autotest_common.sh@862 -- # return 0 00:06:09.982 12:51:31 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:09.982 12:51:31 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:09.982 12:51:31 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:09.982 12:51:31 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:09.982 12:51:31 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:09.982 12:51:31 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:09.982 12:51:31 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:09.982 12:51:31 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.982 12:51:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.982 12:51:31 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 
12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:09.982 12:51:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:09.982 12:51:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:09.982 12:51:31 accel -- accel/accel.sh@75 -- # killprocess 471074 00:06:09.982 12:51:31 accel -- common/autotest_common.sh@948 -- # '[' -z 471074 ']' 00:06:09.982 12:51:31 accel -- common/autotest_common.sh@952 -- # kill -0 471074 00:06:09.982 12:51:31 accel -- common/autotest_common.sh@953 -- # uname 00:06:09.982 12:51:31 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.983 12:51:31 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 471074 00:06:10.244 12:51:31 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.244 12:51:31 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.244 12:51:31 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 471074' 00:06:10.244 killing process with pid 471074 00:06:10.244 12:51:31 accel -- common/autotest_common.sh@967 -- # kill 471074 00:06:10.244 12:51:31 accel -- common/autotest_common.sh@972 -- # wait 471074 00:06:10.244 12:51:32 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:10.244 12:51:32 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:10.244 12:51:32 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:10.244 12:51:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.244 12:51:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.506 12:51:32 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:10.506 12:51:32 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:10.506 12:51:32 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:10.506 12:51:32 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.506 12:51:32 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.506 12:51:32 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.506 12:51:32 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.506 12:51:32 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.506 12:51:32 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:10.506 12:51:32 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
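The get_expected_opcs loop traced above flattens the accel_get_opc_assignments RPC reply (a flat opcode-to-module JSON object) into key=value lines and records every opcode as handled by the software module on this run. A small illustration with a made-up three-opcode payload (the real reply lists every opcode):

# Illustrative payload only; the jq filter and the IFS== read are the ones traced above.
echo '{"copy":"software","fill":"software","crc32c":"software"}' |
    jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' |
    while IFS== read -r opc module; do
        echo "expected_opcs[$opc]=$module"
    done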
00:06:10.506 12:51:32 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.506 12:51:32 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:10.506 12:51:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.506 12:51:32 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:10.506 12:51:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:10.506 12:51:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.506 12:51:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.506 ************************************ 00:06:10.506 START TEST accel_missing_filename 00:06:10.506 ************************************ 00:06:10.506 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:10.506 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:10.506 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:10.506 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:10.506 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.506 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:10.506 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.506 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:10.506 12:51:32 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:10.506 12:51:32 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:10.506 12:51:32 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.506 12:51:32 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.506 12:51:32 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.506 12:51:32 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.506 12:51:32 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.506 12:51:32 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:10.506 12:51:32 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:10.506 [2024-07-15 12:51:32.216222] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:10.506 [2024-07-15 12:51:32.216331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471448 ] 00:06:10.506 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.506 [2024-07-15 12:51:32.285611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.768 [2024-07-15 12:51:32.349671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.768 [2024-07-15 12:51:32.381495] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.768 [2024-07-15 12:51:32.418491] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:10.768 A filename is required. 
00:06:10.768 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:10.768 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.768 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:10.768 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:10.768 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:10.768 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.768 00:06:10.768 real 0m0.286s 00:06:10.768 user 0m0.219s 00:06:10.768 sys 0m0.106s 00:06:10.768 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.768 12:51:32 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:10.768 ************************************ 00:06:10.768 END TEST accel_missing_filename 00:06:10.768 ************************************ 00:06:10.768 12:51:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.768 12:51:32 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.768 12:51:32 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:10.768 12:51:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.768 12:51:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.768 ************************************ 00:06:10.768 START TEST accel_compress_verify 00:06:10.768 ************************************ 00:06:10.768 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.768 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:10.768 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.768 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:10.768 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.768 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:10.768 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.768 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.768 12:51:32 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:10.768 12:51:32 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:10.768 12:51:32 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.768 12:51:32 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.768 12:51:32 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.768 12:51:32 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.768 12:51:32 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.768 12:51:32 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:10.768 12:51:32 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:10.768 [2024-07-15 12:51:32.579175] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:10.768 [2024-07-15 12:51:32.579287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471470 ] 00:06:11.029 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.029 [2024-07-15 12:51:32.648129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.029 [2024-07-15 12:51:32.712724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.029 [2024-07-15 12:51:32.744534] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.029 [2024-07-15 12:51:32.781688] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:11.029 00:06:11.029 Compression does not support the verify option, aborting. 00:06:11.029 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:11.029 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.030 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:11.030 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:11.030 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:11.030 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.030 00:06:11.030 real 0m0.287s 00:06:11.030 user 0m0.219s 00:06:11.030 sys 0m0.109s 00:06:11.030 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.030 12:51:32 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:11.030 ************************************ 00:06:11.030 END TEST accel_compress_verify 00:06:11.030 ************************************ 00:06:11.291 12:51:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.291 12:51:32 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:11.291 12:51:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:11.291 12:51:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.291 12:51:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.291 ************************************ 00:06:11.291 START TEST accel_wrong_workload 00:06:11.291 ************************************ 00:06:11.291 12:51:32 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:11.291 12:51:32 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:11.291 12:51:32 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:11.291 12:51:32 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:11.291 12:51:32 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.291 12:51:32 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:11.291 12:51:32 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.291 12:51:32 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:11.291 12:51:32 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:11.291 12:51:32 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:11.291 12:51:32 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.291 12:51:32 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.291 12:51:32 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.292 12:51:32 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.292 12:51:32 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.292 12:51:32 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:11.292 12:51:32 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:11.292 Unsupported workload type: foobar 00:06:11.292 [2024-07-15 12:51:32.938356] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:11.292 accel_perf options: 00:06:11.292 [-h help message] 00:06:11.292 [-q queue depth per core] 00:06:11.292 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:11.292 [-T number of threads per core 00:06:11.292 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:11.292 [-t time in seconds] 00:06:11.292 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:11.292 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:11.292 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:11.292 [-l for compress/decompress workloads, name of uncompressed input file 00:06:11.292 [-S for crc32c workload, use this seed value (default 0) 00:06:11.292 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:11.292 [-f for fill workload, use this BYTE value (default 255) 00:06:11.292 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:11.292 [-y verify result if this switch is on] 00:06:11.292 [-a tasks to allocate per core (default: same value as -q)] 00:06:11.292 Can be used to spread operations across a wider range of memory. 
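accel_wrong_workload and the following negative tests lean on the NOT wrapper from autotest_common.sh: the test passes precisely because accel_perf rejects the arguments. A simplified stand-in for that wrapper (the real one also massages the exit status, as the es= trace shows):

# Simplified NOT helper: succeed only if the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> negative test fails
    fi
    return 0
}
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w foobar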
00:06:11.292 12:51:32 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:11.292 12:51:32 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.292 12:51:32 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:11.292 12:51:32 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.292 00:06:11.292 real 0m0.034s 00:06:11.292 user 0m0.017s 00:06:11.292 sys 0m0.017s 00:06:11.292 12:51:32 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.292 12:51:32 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:11.292 ************************************ 00:06:11.292 END TEST accel_wrong_workload 00:06:11.292 ************************************ 00:06:11.292 Error: writing output failed: Broken pipe 00:06:11.292 12:51:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.292 12:51:32 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:11.292 12:51:32 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:11.292 12:51:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.292 12:51:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.292 ************************************ 00:06:11.292 START TEST accel_negative_buffers 00:06:11.292 ************************************ 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:11.292 12:51:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:11.292 12:51:33 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:11.292 12:51:33 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.292 12:51:33 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.292 12:51:33 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.292 12:51:33 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.292 12:51:33 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.292 12:51:33 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:11.292 12:51:33 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:11.292 -x option must be non-negative. 
00:06:11.292 [2024-07-15 12:51:33.048935] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:11.292 accel_perf options: 00:06:11.292 [-h help message] 00:06:11.292 [-q queue depth per core] 00:06:11.292 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:11.292 [-T number of threads per core 00:06:11.292 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:11.292 [-t time in seconds] 00:06:11.292 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:11.292 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:11.292 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:11.292 [-l for compress/decompress workloads, name of uncompressed input file 00:06:11.292 [-S for crc32c workload, use this seed value (default 0) 00:06:11.292 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:11.292 [-f for fill workload, use this BYTE value (default 255) 00:06:11.292 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:11.292 [-y verify result if this switch is on] 00:06:11.292 [-a tasks to allocate per core (default: same value as -q)] 00:06:11.292 Can be used to spread operations across a wider range of memory. 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.292 00:06:11.292 real 0m0.037s 00:06:11.292 user 0m0.022s 00:06:11.292 sys 0m0.014s 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.292 12:51:33 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:11.292 ************************************ 00:06:11.292 END TEST accel_negative_buffers 00:06:11.292 ************************************ 00:06:11.292 Error: writing output failed: Broken pipe 00:06:11.292 12:51:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.292 12:51:33 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:11.292 12:51:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:11.292 12:51:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.292 12:51:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.554 ************************************ 00:06:11.554 START TEST accel_crc32c 00:06:11.554 ************************************ 00:06:11.554 12:51:33 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:11.554 [2024-07-15 12:51:33.161349] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:11.554 [2024-07-15 12:51:33.161431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471583 ] 00:06:11.554 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.554 [2024-07-15 12:51:33.233125] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.554 [2024-07-15 12:51:33.306826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.554 12:51:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:12.940 12:51:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.940 00:06:12.940 real 0m1.303s 00:06:12.940 user 0m1.201s 00:06:12.940 sys 0m0.114s 00:06:12.940 12:51:34 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.940 12:51:34 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:12.940 ************************************ 00:06:12.940 END TEST accel_crc32c 00:06:12.940 ************************************ 00:06:12.940 12:51:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.940 12:51:34 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:12.940 12:51:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:12.940 12:51:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.940 12:51:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.940 ************************************ 00:06:12.940 START TEST accel_crc32c_C2 00:06:12.940 ************************************ 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:12.940 12:51:34 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:12.940 [2024-07-15 12:51:34.538433] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:12.940 [2024-07-15 12:51:34.538499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471889 ] 00:06:12.940 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.940 [2024-07-15 12:51:34.608189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.940 [2024-07-15 12:51:34.678258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.940 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:12.941 12:51:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:14.323 12:51:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.323 00:06:14.323 real 0m1.297s 00:06:14.323 user 0m1.199s 00:06:14.324 sys 0m0.109s 00:06:14.324 12:51:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.324 12:51:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:14.324 ************************************ 00:06:14.324 END TEST accel_crc32c_C2 00:06:14.324 ************************************ 00:06:14.324 12:51:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.324 12:51:35 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:14.324 12:51:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:14.324 12:51:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.324 12:51:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.324 ************************************ 00:06:14.324 START TEST accel_copy 00:06:14.324 ************************************ 00:06:14.324 12:51:35 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:14.324 12:51:35 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:14.324 [2024-07-15 12:51:35.913316] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:14.324 [2024-07-15 12:51:35.913421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472241 ] 00:06:14.324 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.324 [2024-07-15 12:51:35.986926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.324 [2024-07-15 12:51:36.057458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.324 12:51:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 
12:51:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:15.703 12:51:37 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.703 00:06:15.703 real 0m1.305s 00:06:15.703 user 0m1.200s 00:06:15.703 sys 0m0.115s 00:06:15.703 12:51:37 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.703 12:51:37 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:15.703 ************************************ 00:06:15.703 END TEST accel_copy 00:06:15.703 ************************************ 00:06:15.703 12:51:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.703 12:51:37 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.703 12:51:37 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:15.703 12:51:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.703 12:51:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.703 ************************************ 00:06:15.703 START TEST accel_fill 00:06:15.703 ************************************ 00:06:15.703 12:51:37 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:15.703 [2024-07-15 12:51:37.290481] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:15.703 [2024-07-15 12:51:37.290578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472595 ] 00:06:15.703 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.703 [2024-07-15 12:51:37.360180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.703 [2024-07-15 12:51:37.429404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:15.703 12:51:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.088 12:51:38 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:17.088 12:51:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.088 00:06:17.088 real 0m1.298s 00:06:17.088 user 0m1.201s 00:06:17.088 sys 0m0.108s 00:06:17.088 12:51:38 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.088 12:51:38 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:17.088 ************************************ 00:06:17.088 END TEST accel_fill 00:06:17.088 ************************************ 00:06:17.088 12:51:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.088 12:51:38 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:17.088 12:51:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:17.088 12:51:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.088 12:51:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.088 ************************************ 00:06:17.088 START TEST accel_copy_crc32c 00:06:17.088 ************************************ 00:06:17.088 12:51:38 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:17.088 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:17.088 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:17.088 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.088 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.088 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:17.089 [2024-07-15 12:51:38.663071] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:17.089 [2024-07-15 12:51:38.663138] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472865 ] 00:06:17.089 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.089 [2024-07-15 12:51:38.732334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.089 [2024-07-15 12:51:38.802044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:17.089 
12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:17.089 12:51:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.121 00:06:18.121 real 0m1.298s 00:06:18.121 user 0m1.199s 00:06:18.121 sys 0m0.112s 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.121 12:51:39 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:18.121 ************************************ 00:06:18.121 END TEST accel_copy_crc32c 00:06:18.121 ************************************ 00:06:18.382 12:51:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.382 12:51:39 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:18.382 12:51:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:18.382 12:51:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.382 12:51:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.382 ************************************ 00:06:18.382 START TEST accel_copy_crc32c_C2 00:06:18.382 ************************************ 00:06:18.382 12:51:40 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:18.382 [2024-07-15 12:51:40.035947] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:18.382 [2024-07-15 12:51:40.036037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473056 ] 00:06:18.382 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.382 [2024-07-15 12:51:40.104808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.382 [2024-07-15 12:51:40.173005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.382 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:18.653 12:51:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.594 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.594 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.594 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.594 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.594 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.594 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.594 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.594 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.594 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.594 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.594 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.595 00:06:19.595 real 0m1.296s 00:06:19.595 user 0m1.202s 00:06:19.595 sys 0m0.106s 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.595 12:51:41 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:19.595 ************************************ 00:06:19.595 END TEST accel_copy_crc32c_C2 00:06:19.595 ************************************ 00:06:19.595 12:51:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.595 12:51:41 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:19.595 12:51:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:19.595 12:51:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.595 12:51:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.595 ************************************ 00:06:19.595 START TEST accel_dualcast 00:06:19.595 ************************************ 00:06:19.595 12:51:41 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:19.595 12:51:41 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:19.595 [2024-07-15 12:51:41.407598] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:06:19.595 [2024-07-15 12:51:41.407692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473337 ] 00:06:19.856 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.856 [2024-07-15 12:51:41.476896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.856 [2024-07-15 12:51:41.543600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:19.856 12:51:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.239 12:51:42 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:21.239 12:51:42 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.239 00:06:21.239 real 0m1.297s 00:06:21.239 user 0m1.199s 00:06:21.239 sys 0m0.107s 00:06:21.239 12:51:42 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.239 12:51:42 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:21.239 ************************************ 00:06:21.239 END TEST accel_dualcast 00:06:21.239 ************************************ 00:06:21.239 12:51:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.239 12:51:42 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:21.239 12:51:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:21.239 12:51:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.239 12:51:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.239 ************************************ 00:06:21.239 START TEST accel_compare 00:06:21.239 ************************************ 00:06:21.239 12:51:42 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:21.239 [2024-07-15 12:51:42.776088] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
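The compare run that begins here is read back the same way as the previous ones: the IFS=:/read loop appears to split accel_perf's output on ':', the case statement stores fields such as the module name and buffer size in val, and the trailing [[ -n software ]] / [[ -n compare ]] checks assert that the software module actually executed the compare opcode. To push the same opcode harder than this one-second run, accel_perf also exposes queue-depth and transfer-size options (-q and -o in current SPDK builds, to the best of my knowledge); pairing the 32 and 4096 values it reads back above with those two options is an assumption, not something the log confirms.

  # Hypothetical tuned compare run; mapping 32 -> queue depth and 4096 -> transfer
  # size is inferred from the values parsed above, not stated in the log.
  ./build/examples/accel_perf -t 1 -w compare -y -q 32 -o 4096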
00:06:21.239 [2024-07-15 12:51:42.776183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473684 ] 00:06:21.239 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.239 [2024-07-15 12:51:42.844676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.239 [2024-07-15 12:51:42.913024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.239 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:21.240 12:51:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 
12:51:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:22.632 12:51:44 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.632 00:06:22.632 real 0m1.296s 00:06:22.632 user 0m1.199s 00:06:22.632 sys 0m0.106s 00:06:22.632 12:51:44 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.632 12:51:44 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:22.632 ************************************ 00:06:22.632 END TEST accel_compare 00:06:22.632 ************************************ 00:06:22.632 12:51:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.632 12:51:44 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:22.632 12:51:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:22.632 12:51:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.632 12:51:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.632 ************************************ 00:06:22.632 START TEST accel_xor 00:06:22.632 ************************************ 00:06:22.632 12:51:44 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:22.632 12:51:44 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:22.633 [2024-07-15 12:51:44.145783] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
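This first xor pass uses the tool's defaults apart from the one-second runtime; the val=2 field read back in the trace suggests two source buffers were XORed, which matches the plain invocation sketched below. The follow-on test repeats the workload with three sources.

  # Hypothetical plain xor run (two source buffers by default, judging by the
  # val=2 field parsed above).
  ./build/examples/accel_perf -t 1 -w xor -y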
00:06:22.633 [2024-07-15 12:51:44.145849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474033 ] 00:06:22.633 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.633 [2024-07-15 12:51:44.213694] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.633 [2024-07-15 12:51:44.277855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:22.633 12:51:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.014 00:06:24.014 real 0m1.290s 00:06:24.014 user 0m1.207s 00:06:24.014 sys 0m0.094s 00:06:24.014 12:51:45 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.014 12:51:45 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:24.014 ************************************ 00:06:24.014 END TEST accel_xor 00:06:24.014 ************************************ 00:06:24.014 12:51:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.014 12:51:45 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:24.014 12:51:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:24.014 12:51:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.014 12:51:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.014 ************************************ 00:06:24.014 START TEST accel_xor 00:06:24.014 ************************************ 00:06:24.014 12:51:45 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:24.014 [2024-07-15 12:51:45.512531] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
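The second xor pass is the same workload with -x 3 added to the accel_test command line, and the parser now reads back val=3 where the previous run read val=2. Interpreting -x as the number of xor source buffers rests on that pairing alone; the log itself only shows the flag being passed through.

  # Hypothetical three-source xor run; the meaning of -x (source-buffer count)
  # is inferred from the val=2 -> val=3 change above, not stated in the log.
  ./build/examples/accel_perf -t 1 -w xor -y -x 3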
00:06:24.014 [2024-07-15 12:51:45.512626] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474305 ] 00:06:24.014 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.014 [2024-07-15 12:51:45.583635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.014 [2024-07-15 12:51:45.656035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:24.014 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.014 12:51:45 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.015 12:51:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.958 12:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.958 12:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.958 12:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.958 12:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.958 12:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.958 12:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.958 12:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.958 12:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.958 12:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.958 12:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.958 12:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.958 12:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.958 12:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:25.218 12:51:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.218 00:06:25.218 real 0m1.303s 00:06:25.218 user 0m1.214s 00:06:25.218 sys 0m0.102s 00:06:25.218 12:51:46 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.218 12:51:46 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:25.218 ************************************ 00:06:25.218 END TEST accel_xor 00:06:25.218 ************************************ 00:06:25.218 12:51:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.218 12:51:46 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:25.218 12:51:46 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:25.218 12:51:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.218 12:51:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.218 ************************************ 00:06:25.218 START TEST accel_dif_verify 00:06:25.218 ************************************ 00:06:25.218 12:51:46 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:25.218 12:51:46 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:25.218 [2024-07-15 12:51:46.889594] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
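dif_verify drops the -y switch (the parser reads back val=No instead of val=Yes), and the trace picks up extra size fields: 4096 bytes twice, 512 bytes, and 8 bytes. Those look like the DIF geometry the tool reports for this opcode, though the log never labels them. A bare re-run would be:

  # Hypothetical dif_verify run; no -y here, matching the accel_test invocation above.
  ./build/examples/accel_perf -t 1 -w dif_verify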
00:06:25.218 [2024-07-15 12:51:46.889665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474502 ] 00:06:25.218 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.218 [2024-07-15 12:51:46.955539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.218 [2024-07-15 12:51:47.019610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.479 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.480 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.480 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:25.480 12:51:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:25.480 12:51:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:25.480 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:25.480 12:51:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:26.422 12:51:48 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.422 00:06:26.422 real 0m1.288s 00:06:26.422 user 0m1.196s 00:06:26.422 sys 0m0.105s 00:06:26.422 12:51:48 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.422 12:51:48 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:26.422 ************************************ 00:06:26.422 END TEST accel_dif_verify 00:06:26.422 ************************************ 00:06:26.422 12:51:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.422 12:51:48 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:26.422 12:51:48 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:26.422 12:51:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.422 12:51:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.422 ************************************ 00:06:26.422 START TEST accel_dif_generate 00:06:26.422 ************************************ 00:06:26.422 12:51:48 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:26.422 12:51:48 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:26.422 12:51:48 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:26.422 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.422 
12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.422 12:51:48 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:26.422 12:51:48 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:26.422 12:51:48 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:26.422 12:51:48 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.422 12:51:48 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.422 12:51:48 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.422 12:51:48 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.422 12:51:48 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.422 12:51:48 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:26.422 12:51:48 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:26.683 [2024-07-15 12:51:48.250537] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:26.683 [2024-07-15 12:51:48.250601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474772 ] 00:06:26.683 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.683 [2024-07-15 12:51:48.318101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.683 [2024-07-15 12:51:48.384620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:26.683 12:51:48 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:26.683 12:51:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.065 12:51:49 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:28.065 12:51:49 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.065 00:06:28.065 real 0m1.291s 00:06:28.065 user 0m1.202s 00:06:28.065 sys 0m0.103s 00:06:28.065 12:51:49 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.065 12:51:49 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:28.065 ************************************ 00:06:28.065 END TEST accel_dif_generate 00:06:28.065 ************************************ 00:06:28.065 12:51:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.065 12:51:49 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:28.065 12:51:49 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:28.065 12:51:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.065 12:51:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.065 ************************************ 00:06:28.065 START TEST accel_dif_generate_copy 00:06:28.065 ************************************ 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:28.065 [2024-07-15 12:51:49.617373] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:06:28.065 [2024-07-15 12:51:49.617465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475125 ] 00:06:28.065 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.065 [2024-07-15 12:51:49.685597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.065 [2024-07-15 12:51:49.750973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.065 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:28.066 12:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.451 00:06:29.451 real 0m1.292s 00:06:29.451 user 0m1.203s 00:06:29.451 sys 0m0.101s 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.451 12:51:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:29.451 ************************************ 00:06:29.451 END TEST accel_dif_generate_copy 00:06:29.451 ************************************ 00:06:29.451 12:51:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.451 12:51:50 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:29.451 12:51:50 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.451 12:51:50 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:29.451 12:51:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.451 12:51:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.451 ************************************ 00:06:29.451 START TEST accel_comp 00:06:29.451 ************************************ 00:06:29.451 12:51:50 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.451 12:51:50 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:29.451 12:51:50 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:29.451 12:51:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:50 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.451 12:51:50 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.451 12:51:50 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:29.451 12:51:50 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.451 12:51:50 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.451 12:51:50 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.451 12:51:50 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.451 12:51:50 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.451 12:51:50 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:29.451 12:51:50 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:29.451 [2024-07-15 12:51:50.986840] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:29.451 [2024-07-15 12:51:50.986936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475474 ] 00:06:29.451 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.451 [2024-07-15 12:51:51.055321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.451 [2024-07-15 12:51:51.119596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:51 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:29.451 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:29.452 12:51:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:30.838 12:51:52 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.838 00:06:30.838 real 0m1.295s 00:06:30.838 user 0m1.198s 00:06:30.838 sys 0m0.109s 00:06:30.838 12:51:52 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.838 12:51:52 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:30.838 ************************************ 00:06:30.838 END TEST accel_comp 00:06:30.838 ************************************ 00:06:30.838 12:51:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.838 12:51:52 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.838 12:51:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:30.838 12:51:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.838 12:51:52 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:30.838 ************************************ 00:06:30.838 START TEST accel_decomp 00:06:30.838 ************************************ 00:06:30.839 12:51:52 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:30.839 [2024-07-15 12:51:52.355092] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:06:30.839 [2024-07-15 12:51:52.355156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475774 ] 00:06:30.839 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.839 [2024-07-15 12:51:52.422811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.839 [2024-07-15 12:51:52.487756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:30.839 12:51:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 12:51:53 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.223 12:51:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.223 00:06:32.223 real 0m1.293s 00:06:32.223 user 0m1.202s 00:06:32.223 sys 0m0.103s 00:06:32.223 12:51:53 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.223 12:51:53 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:32.223 ************************************ 00:06:32.223 END TEST accel_decomp 00:06:32.223 ************************************ 00:06:32.223 12:51:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.223 12:51:53 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.223 12:51:53 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:32.223 12:51:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.223 12:51:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.223 ************************************ 00:06:32.223 START TEST accel_decomp_full 00:06:32.223 ************************************ 00:06:32.223 12:51:53 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.223 12:51:53 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:32.223 12:51:53 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:32.223 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.223 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.223 12:51:53 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.223 12:51:53 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:32.223 12:51:53 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:32.223 12:51:53 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.223 12:51:53 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.223 12:51:53 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.223 12:51:53 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.223 12:51:53 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.223 12:51:53 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:32.223 12:51:53 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:32.223 [2024-07-15 12:51:53.725324] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:32.224 [2024-07-15 12:51:53.725422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475968 ] 00:06:32.224 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.224 [2024-07-15 12:51:53.795412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.224 [2024-07-15 12:51:53.865568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:32.224 12:51:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.187 12:51:55 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.187 00:06:33.187 real 0m1.314s 00:06:33.187 user 0m1.216s 00:06:33.187 sys 0m0.111s 00:06:33.187 12:51:55 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.187 12:51:55 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:33.187 ************************************ 00:06:33.187 END TEST accel_decomp_full 00:06:33.187 ************************************ 00:06:33.448 12:51:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.448 12:51:55 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:33.448 12:51:55 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:33.448 12:51:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.448 12:51:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.448 ************************************ 00:06:33.448 START TEST accel_decomp_mcore 00:06:33.448 ************************************ 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:33.448 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:33.448 [2024-07-15 12:51:55.111315] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:06:33.448 [2024-07-15 12:51:55.111402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476213 ] 00:06:33.448 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.448 [2024-07-15 12:51:55.180826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.448 [2024-07-15 12:51:55.249288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.448 [2024-07-15 12:51:55.249407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.448 [2024-07-15 12:51:55.249572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.448 [2024-07-15 12:51:55.249573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.709 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.710 12:51:55 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:33.710 12:51:55 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.652 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.653 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.653 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.653 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:34.653 12:51:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.653 00:06:34.653 real 0m1.305s 00:06:34.653 user 0m4.441s 00:06:34.653 sys 0m0.108s 00:06:34.653 12:51:56 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.653 12:51:56 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:34.653 ************************************ 00:06:34.653 END TEST accel_decomp_mcore 00:06:34.653 ************************************ 00:06:34.653 12:51:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.653 12:51:56 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:34.653 12:51:56 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:34.653 12:51:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.653 12:51:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.653 ************************************ 00:06:34.653 START TEST accel_decomp_full_mcore 00:06:34.653 ************************************ 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:34.653 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:34.915 [2024-07-15 12:51:56.490181] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
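A hedged sketch of the accel_decomp_full_mcore case that starts just above: it reduces to a single accel_perf invocation, and every flag below is copied from the command recorded in the trace. The harness additionally passes -c /dev/fd/62 with a JSON config built by build_accel_config (empty in this run); assuming the plain software path also works without that config is my reading, not something the log states.

  # Workspace layout as used throughout this job
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 1          run for one second (matches the val='1 seconds' trace lines)
  # -w decompress the accel opcode under test (accel_opc=decompress above)
  # -l .../bib    compressed input file shipped with the accel tests
  # -y            verify the output (flag taken verbatim from the recorded command)
  # -o 0 -m 0xf   full-size buffers ('111250 bytes' instead of '4096 bytes') on a
  #               four-core mask, which is why four "Reactor started" notices follow
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf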
00:06:34.915 [2024-07-15 12:51:56.490321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476565 ] 00:06:34.915 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.915 [2024-07-15 12:51:56.559816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.915 [2024-07-15 12:51:56.627985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.915 [2024-07-15 12:51:56.628106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.915 [2024-07-15 12:51:56.628279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.915 [2024-07-15 12:51:56.628279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.915 12:51:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.304 00:06:36.304 real 0m1.316s 00:06:36.304 user 0m4.470s 00:06:36.304 sys 0m0.125s 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.304 12:51:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:36.304 ************************************ 00:06:36.304 END TEST accel_decomp_full_mcore 00:06:36.304 ************************************ 00:06:36.304 12:51:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.304 12:51:57 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:36.304 12:51:57 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:36.304 12:51:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.304 12:51:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.304 ************************************ 00:06:36.304 START TEST accel_decomp_mthread 00:06:36.304 ************************************ 00:06:36.304 12:51:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:36.304 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:36.304 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:36.304 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.304 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.304 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:36.304 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:36.305 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:36.305 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.305 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.305 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.305 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.305 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.305 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:36.305 12:51:57 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:36.305 [2024-07-15 12:51:57.880688] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
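For orientation, the decompress variants in this stretch differ only in their accel_perf flags, all readable from the run_test commands in the trace (the plain accel_decomp_mcore flags are inferred from its name and from its user time being several times its real time): -m 0xf spreads the work over four cores, -T 2 uses two worker threads on one core (core mask 0x1 in the EAL line below), and -o 0 switches from '4096 bytes' to full '111250 bytes' buffers. A sketch of the accel_decomp_mthread case that begins here, under the same no-config assumption as above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # accel_decomp_full_mcore  : ... -y -o 0 -m 0xf   (recorded above)
  # accel_decomp_mthread     : ... -y -T 2          (this test)
  # accel_decomp_full_mthread: ... -y -o 0 -T 2     (recorded below)
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2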
00:06:36.305 [2024-07-15 12:51:57.880774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476924 ] 00:06:36.305 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.305 [2024-07-15 12:51:57.948786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.305 [2024-07-15 12:51:58.012684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.305 12:51:58 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:36.305 12:51:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 12:51:59 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.693 00:06:37.693 real 0m1.297s 00:06:37.693 user 0m1.203s 00:06:37.693 sys 0m0.106s 00:06:37.693 12:51:59 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.694 12:51:59 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:37.694 ************************************ 00:06:37.694 END TEST accel_decomp_mthread 00:06:37.694 ************************************ 00:06:37.694 12:51:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.694 12:51:59 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.694 12:51:59 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:37.694 12:51:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.694 12:51:59 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.694 ************************************ 00:06:37.694 START TEST accel_decomp_full_mthread 00:06:37.694 ************************************ 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:37.694 [2024-07-15 12:51:59.256445] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
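The "EAL: No free 2048 kB hugepages reported on node 1" notice recurs before every accel_perf start in this log; the runs still succeed, so the hugepage reservation is presumably satisfied elsewhere (another NUMA node or page size). If the notice ever applies to every node, EAL initialization fails, and the usual remedy is SPDK's setup script; the HUGEMEM knob and the 2 GiB figure below are assumptions based on the stock script, not something this trace shows.

  # Inspect the current reservation
  grep -i -e HugePages_Total -e HugePages_Free /proc/meminfo
  # Reserve roughly 2 GiB of hugepages using the script shipped in the same tree
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo HUGEMEM=2048 "$SPDK/scripts/setup.sh"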
00:06:37.694 [2024-07-15 12:51:59.256534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477273 ] 00:06:37.694 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.694 [2024-07-15 12:51:59.328678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.694 [2024-07-15 12:51:59.394248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.694 12:51:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.081 00:06:39.081 real 0m1.331s 00:06:39.081 user 0m1.232s 00:06:39.081 sys 0m0.111s 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.081 12:52:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:39.081 ************************************ 00:06:39.081 END TEST accel_decomp_full_mthread 
00:06:39.081 ************************************ 00:06:39.081 12:52:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.081 12:52:00 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:39.081 12:52:00 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:39.081 12:52:00 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:39.081 12:52:00 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:39.081 12:52:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.081 12:52:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.081 12:52:00 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.081 12:52:00 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.081 12:52:00 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.081 12:52:00 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.081 12:52:00 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.081 12:52:00 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:39.081 12:52:00 accel -- accel/accel.sh@41 -- # jq -r . 00:06:39.081 ************************************ 00:06:39.081 START TEST accel_dif_functional_tests 00:06:39.081 ************************************ 00:06:39.081 12:52:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:39.081 [2024-07-15 12:52:00.680572] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:39.081 [2024-07-15 12:52:00.680625] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477487 ] 00:06:39.081 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.081 [2024-07-15 12:52:00.749512] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.081 [2024-07-15 12:52:00.822882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.081 [2024-07-15 12:52:00.823001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.081 [2024-07-15 12:52:00.823004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.081 00:06:39.081 00:06:39.081 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.081 http://cunit.sourceforge.net/ 00:06:39.081 00:06:39.081 00:06:39.081 Suite: accel_dif 00:06:39.081 Test: verify: DIF generated, GUARD check ...passed 00:06:39.081 Test: verify: DIF generated, APPTAG check ...passed 00:06:39.081 Test: verify: DIF generated, REFTAG check ...passed 00:06:39.081 Test: verify: DIF not generated, GUARD check ...[2024-07-15 12:52:00.878686] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:39.081 passed 00:06:39.081 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 12:52:00.878728] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:39.081 passed 00:06:39.081 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 12:52:00.878749] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:39.081 passed 00:06:39.081 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:39.081 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 12:52:00.878797] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:39.081 passed 00:06:39.081 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:39.081 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:39.081 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:39.081 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 12:52:00.878909] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:39.081 passed 00:06:39.081 Test: verify copy: DIF generated, GUARD check ...passed 00:06:39.081 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:39.081 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:39.081 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 12:52:00.879026] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:39.081 passed 00:06:39.081 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 12:52:00.879049] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:39.081 passed 00:06:39.081 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 12:52:00.879070] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:39.081 passed 00:06:39.081 Test: generate copy: DIF generated, GUARD check ...passed 00:06:39.081 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:39.081 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:39.081 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:39.081 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:39.081 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:39.081 Test: generate copy: iovecs-len validate ...[2024-07-15 12:52:00.879257] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
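The dif.c *ERROR* lines above are expected output, not failures: each "DIF not generated" and "verify copy" case plants a mismatching Guard, App Tag or Ref Tag and asserts that _dif_verify reports it, so CUnit records them as passed (26 of 26 in the summary that follows). The suite can be rerun on its own with the binary from the recorded command; whether it also runs without the -c /dev/fd/62 config the harness supplies is an assumption on my part.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Recorded shape: test/accel/dif/dif -c <fd carrying the JSON config from build_accel_config>
  "$SPDK/test/accel/dif/dif"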
00:06:39.081 passed 00:06:39.081 Test: generate copy: buffer alignment validate ...passed 00:06:39.081 00:06:39.081 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.081 suites 1 1 n/a 0 0 00:06:39.081 tests 26 26 26 0 0 00:06:39.081 asserts 115 115 115 0 n/a 00:06:39.081 00:06:39.081 Elapsed time = 0.002 seconds 00:06:39.342 00:06:39.342 real 0m0.365s 00:06:39.342 user 0m0.494s 00:06:39.342 sys 0m0.136s 00:06:39.342 12:52:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.342 12:52:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:39.342 ************************************ 00:06:39.342 END TEST accel_dif_functional_tests 00:06:39.342 ************************************ 00:06:39.342 12:52:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.342 00:06:39.342 real 0m30.254s 00:06:39.342 user 0m33.764s 00:06:39.342 sys 0m4.240s 00:06:39.342 12:52:01 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.342 12:52:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.342 ************************************ 00:06:39.342 END TEST accel 00:06:39.342 ************************************ 00:06:39.342 12:52:01 -- common/autotest_common.sh@1142 -- # return 0 00:06:39.342 12:52:01 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:39.342 12:52:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.342 12:52:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.342 12:52:01 -- common/autotest_common.sh@10 -- # set +x 00:06:39.342 ************************************ 00:06:39.342 START TEST accel_rpc 00:06:39.342 ************************************ 00:06:39.342 12:52:01 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:39.603 * Looking for test storage... 00:06:39.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:39.603 12:52:01 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:39.603 12:52:01 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=477695 00:06:39.603 12:52:01 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 477695 00:06:39.603 12:52:01 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:39.603 12:52:01 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 477695 ']' 00:06:39.603 12:52:01 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.603 12:52:01 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.603 12:52:01 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.603 12:52:01 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.603 12:52:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.603 [2024-07-15 12:52:01.266732] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
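The accel_rpc suite that starts here drives a bare spdk_tgt purely over JSON-RPC. A sketch of the same flow by hand, using only RPC methods that appear later in this trace; the sleep is a crude stand-in for the harness's waitforlisten helper.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &      # start the target with subsystem init deferred
  TGT=$!
  sleep 2
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
  "$SPDK/scripts/rpc.py" framework_start_init                     # now finish initialization
  "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy  # the test expects "software" here
  kill $TGT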
00:06:39.603 [2024-07-15 12:52:01.266805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477695 ] 00:06:39.603 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.603 [2024-07-15 12:52:01.337681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.603 [2024-07-15 12:52:01.411442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:40.544 12:52:02 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:40.544 12:52:02 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:40.544 12:52:02 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:40.544 12:52:02 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:40.544 12:52:02 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.544 ************************************ 00:06:40.544 START TEST accel_assign_opcode 00:06:40.544 ************************************ 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:40.544 [2024-07-15 12:52:02.069365] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:40.544 [2024-07-15 12:52:02.081392] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.544 software 00:06:40.544 00:06:40.544 real 0m0.212s 00:06:40.544 user 0m0.050s 00:06:40.544 sys 0m0.011s 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.544 12:52:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:40.544 ************************************ 00:06:40.544 END TEST accel_assign_opcode 00:06:40.544 ************************************ 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:40.544 12:52:02 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 477695 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 477695 ']' 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 477695 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 477695 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 477695' 00:06:40.544 killing process with pid 477695 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@967 -- # kill 477695 00:06:40.544 12:52:02 accel_rpc -- common/autotest_common.sh@972 -- # wait 477695 00:06:40.806 00:06:40.806 real 0m1.468s 00:06:40.806 user 0m1.536s 00:06:40.806 sys 0m0.419s 00:06:40.806 12:52:02 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.806 12:52:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.806 ************************************ 00:06:40.806 END TEST accel_rpc 00:06:40.806 ************************************ 00:06:40.806 12:52:02 -- common/autotest_common.sh@1142 -- # return 0 00:06:40.806 12:52:02 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:40.806 12:52:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.806 12:52:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.806 12:52:02 -- common/autotest_common.sh@10 -- # set +x 00:06:41.067 ************************************ 00:06:41.067 START TEST app_cmdline 00:06:41.067 ************************************ 00:06:41.067 12:52:02 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:41.067 * Looking for test storage... 
00:06:41.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:41.067 12:52:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:41.067 12:52:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=478105 00:06:41.067 12:52:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 478105 00:06:41.067 12:52:02 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:41.067 12:52:02 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 478105 ']' 00:06:41.067 12:52:02 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.067 12:52:02 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.067 12:52:02 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.067 12:52:02 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.067 12:52:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.067 [2024-07-15 12:52:02.805310] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:06:41.067 [2024-07-15 12:52:02.805383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478105 ] 00:06:41.067 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.067 [2024-07-15 12:52:02.875565] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.328 [2024-07-15 12:52:02.949483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.902 12:52:03 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.902 12:52:03 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:41.902 12:52:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:41.902 { 00:06:41.902 "version": "SPDK v24.09-pre git sha1 c6070605c", 00:06:41.902 "fields": { 00:06:41.902 "major": 24, 00:06:41.902 "minor": 9, 00:06:41.902 "patch": 0, 00:06:41.902 "suffix": "-pre", 00:06:41.902 "commit": "c6070605c" 00:06:41.902 } 00:06:41.902 } 00:06:41.902 12:52:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:41.902 12:52:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:41.902 12:52:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:41.902 12:52:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:41.902 12:52:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:41.902 12:52:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:41.902 12:52:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:41.902 12:52:03 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.902 12:52:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.902 12:52:03 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.163 12:52:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:42.163 12:52:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:42.163 12:52:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:42.163 request: 00:06:42.163 { 00:06:42.163 "method": "env_dpdk_get_mem_stats", 00:06:42.163 "req_id": 1 00:06:42.163 } 00:06:42.163 Got JSON-RPC error response 00:06:42.163 response: 00:06:42.163 { 00:06:42.163 "code": -32601, 00:06:42.163 "message": "Method not found" 00:06:42.163 } 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.163 12:52:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 478105 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 478105 ']' 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 478105 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 478105 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 478105' 00:06:42.163 killing process with pid 478105 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@967 -- # kill 478105 00:06:42.163 12:52:03 app_cmdline -- common/autotest_common.sh@972 -- # wait 478105 00:06:42.424 00:06:42.424 real 0m1.542s 00:06:42.424 user 0m1.812s 00:06:42.424 sys 0m0.427s 00:06:42.424 12:52:04 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.424 
12:52:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.424 ************************************ 00:06:42.424 END TEST app_cmdline 00:06:42.424 ************************************ 00:06:42.424 12:52:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:42.424 12:52:04 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:42.424 12:52:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.424 12:52:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.424 12:52:04 -- common/autotest_common.sh@10 -- # set +x 00:06:42.686 ************************************ 00:06:42.686 START TEST version 00:06:42.686 ************************************ 00:06:42.686 12:52:04 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:42.686 * Looking for test storage... 00:06:42.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:42.686 12:52:04 version -- app/version.sh@17 -- # get_header_version major 00:06:42.686 12:52:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:42.686 12:52:04 version -- app/version.sh@14 -- # cut -f2 00:06:42.686 12:52:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.686 12:52:04 version -- app/version.sh@17 -- # major=24 00:06:42.686 12:52:04 version -- app/version.sh@18 -- # get_header_version minor 00:06:42.686 12:52:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:42.686 12:52:04 version -- app/version.sh@14 -- # cut -f2 00:06:42.686 12:52:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.686 12:52:04 version -- app/version.sh@18 -- # minor=9 00:06:42.686 12:52:04 version -- app/version.sh@19 -- # get_header_version patch 00:06:42.686 12:52:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:42.686 12:52:04 version -- app/version.sh@14 -- # cut -f2 00:06:42.686 12:52:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.686 12:52:04 version -- app/version.sh@19 -- # patch=0 00:06:42.686 12:52:04 version -- app/version.sh@20 -- # get_header_version suffix 00:06:42.686 12:52:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:42.686 12:52:04 version -- app/version.sh@14 -- # cut -f2 00:06:42.686 12:52:04 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.686 12:52:04 version -- app/version.sh@20 -- # suffix=-pre 00:06:42.686 12:52:04 version -- app/version.sh@22 -- # version=24.9 00:06:42.686 12:52:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:42.686 12:52:04 version -- app/version.sh@28 -- # version=24.9rc0 00:06:42.686 12:52:04 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:42.686 12:52:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
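The version test above pulls the SPDK_VERSION_* macros out of include/spdk/version.h and checks them against what the Python bindings report. A minimal sketch of that derivation, assuming the tab-separated '#define' layout that the test's cut -f2 relies on; the get helper here is just shorthand for the per-field grep | cut | tr pipeline in the trace:

  hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
  get() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }

  major=$(get MAJOR)    # 24 in this run
  minor=$(get MINOR)    # 9
  patch=$(get PATCH)    # 0
  suffix=$(get SUFFIX)  # -pre

  version=$major.$minor              # patch is 0 here, so it is not appended
  version=${version}${suffix:+rc0}   # in this run the -pre suffix is rendered as rc0

  # Must match the installed Python package.
  python3 -c 'import spdk; print(spdk.__version__)'   # 24.9rc0 here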
00:06:42.686 12:52:04 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:42.686 12:52:04 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:42.686 00:06:42.686 real 0m0.176s 00:06:42.686 user 0m0.085s 00:06:42.686 sys 0m0.130s 00:06:42.686 12:52:04 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.686 12:52:04 version -- common/autotest_common.sh@10 -- # set +x 00:06:42.686 ************************************ 00:06:42.686 END TEST version 00:06:42.686 ************************************ 00:06:42.686 12:52:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:42.686 12:52:04 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:42.686 12:52:04 -- spdk/autotest.sh@198 -- # uname -s 00:06:42.686 12:52:04 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:42.686 12:52:04 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:42.686 12:52:04 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:42.686 12:52:04 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:42.686 12:52:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:42.686 12:52:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:42.686 12:52:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:42.686 12:52:04 -- common/autotest_common.sh@10 -- # set +x 00:06:42.947 12:52:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:42.947 12:52:04 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:42.947 12:52:04 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:42.947 12:52:04 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:42.947 12:52:04 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:42.947 12:52:04 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:42.947 12:52:04 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:42.947 12:52:04 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:42.947 12:52:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.947 12:52:04 -- common/autotest_common.sh@10 -- # set +x 00:06:42.947 ************************************ 00:06:42.947 START TEST nvmf_tcp 00:06:42.947 ************************************ 00:06:42.947 12:52:04 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:42.947 * Looking for test storage... 00:06:42.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.947 12:52:04 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.948 12:52:04 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.948 12:52:04 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.948 12:52:04 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.948 12:52:04 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.948 12:52:04 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.948 12:52:04 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.948 12:52:04 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:42.948 12:52:04 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:42.948 12:52:04 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:42.948 12:52:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:42.948 12:52:04 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:42.948 12:52:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:42.948 12:52:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.948 12:52:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:42.948 ************************************ 00:06:42.948 START TEST nvmf_example 00:06:42.948 ************************************ 00:06:42.948 12:52:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:43.210 * Looking for test storage... 
00:06:43.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.210 12:52:04 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:43.211 12:52:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:51.359 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:51.359 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:51.359 Found net devices under 
0000:31:00.0: cvl_0_0 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:51.359 Found net devices under 0000:31:00.1: cvl_0_1 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:51.359 12:52:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.359 12:52:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.359 12:52:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:51.359 12:52:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:51.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:06:51.359 00:06:51.359 --- 10.0.0.2 ping statistics --- 00:06:51.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.359 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:06:51.359 12:52:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:51.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:06:51.359 00:06:51.359 --- 10.0.0.1 ping statistics --- 00:06:51.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.359 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:06:51.359 12:52:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.359 12:52:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=482887 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 482887 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 482887 ']' 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
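The nvmf_tcp_init step traced above splits the two ice (e810) ports that were discovered into a target side and an initiator side: cvl_0_0 moves into a fresh cvl_0_0_ns_spdk network namespace as 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace (a sketch, not a verbatim replay):

  # Target side lives in its own network namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Initiator side stays in the root namespace and must reach TCP port 4420.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Connectivity checks in both directions, as in the log.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1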
00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.360 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.620 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.215 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.215 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:52.215 12:52:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:52.215 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:52.215 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.215 12:52:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:52.215 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.215 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.215 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.215 12:52:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:52.215 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.215 12:52:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.215 12:52:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.215 12:52:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:52.215 12:52:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:52.215 12:52:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.215 12:52:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.215 12:52:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.215 12:52:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:52.215 12:52:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:52.215 12:52:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.215 12:52:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.509 12:52:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.509 12:52:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.509 12:52:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.509 12:52:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:52.509 12:52:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.509 12:52:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:52.509 12:52:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:52.509 EAL: No free 2048 kB hugepages reported on node 1 
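With the example nvmf target (pid 482887) listening, the test provisions it over RPC and then drives it from the initiator side with spdk_nvme_perf; the latency table that follows is the result of that run. The sequence traced above, condensed into a sketch:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # TCP transport plus a 64 MiB malloc bdev with 512-byte blocks.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512      # returns the bdev name, Malloc0

  # Expose the bdev as a namespace of cnode1 on the target-side address.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 10 seconds of 4 KiB random I/O at queue depth 64 (-M 30 read/write mix).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'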
00:07:02.499 Initializing NVMe Controllers 00:07:02.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:02.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:02.499 Initialization complete. Launching workers. 00:07:02.499 ======================================================== 00:07:02.499 Latency(us) 00:07:02.499 Device Information : IOPS MiB/s Average min max 00:07:02.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18222.46 71.18 3511.82 843.03 15973.77 00:07:02.499 ======================================================== 00:07:02.499 Total : 18222.46 71.18 3511.82 843.03 15973.77 00:07:02.499 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:02.759 rmmod nvme_tcp 00:07:02.759 rmmod nvme_fabrics 00:07:02.759 rmmod nvme_keyring 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 482887 ']' 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 482887 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 482887 ']' 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 482887 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 482887 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 482887' 00:07:02.759 killing process with pid 482887 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 482887 00:07:02.759 12:52:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 482887 00:07:02.759 nvmf threads initialize successfully 00:07:02.759 bdev subsystem init successfully 00:07:02.759 created a nvmf target service 00:07:02.759 create targets's poll groups done 00:07:02.759 all subsystems of target started 00:07:02.759 nvmf target is running 00:07:02.759 all subsystems of target stopped 00:07:02.759 destroy targets's poll groups done 00:07:02.759 destroyed the nvmf target service 00:07:02.759 bdev subsystem finish successfully 00:07:02.759 nvmf threads destroy successfully 00:07:03.029 12:52:24 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:03.029 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:03.029 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:03.029 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.029 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:03.029 12:52:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.029 12:52:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.029 12:52:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.952 12:52:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:04.952 12:52:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:04.952 12:52:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:04.952 12:52:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.952 00:07:04.952 real 0m21.969s 00:07:04.952 user 0m46.931s 00:07:04.952 sys 0m7.113s 00:07:04.952 12:52:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.952 12:52:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:04.952 ************************************ 00:07:04.952 END TEST nvmf_example 00:07:04.952 ************************************ 00:07:04.952 12:52:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:04.952 12:52:26 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:04.952 12:52:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:04.952 12:52:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.952 12:52:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.215 ************************************ 00:07:05.215 START TEST nvmf_filesystem 00:07:05.215 ************************************ 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:05.215 * Looking for test storage... 
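The nvmf_example teardown that finished just above runs in roughly the reverse order of setup: unload the NVMe/TCP host modules, stop the example target, then dismantle the namespace wiring. A sketch of the equivalent steps; the namespace removal itself happens inside remove_spdk_ns, whose trace is suppressed, so that line is an assumption:

  modprobe -r nvme-tcp                 # rmmod nvme_tcp / nvme_fabrics / nvme_keyring in the log
  modprobe -r nvme-fabrics
  kill 482887                          # killprocess on the example target pid
  ip netns delete cvl_0_0_ns_spdk      # assumed: performed by remove_spdk_ns
  ip -4 addr flush cvl_0_1             # drop the initiator address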
00:07:05.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:05.215 12:52:26 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:05.215 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:05.216 #define SPDK_CONFIG_H 00:07:05.216 #define SPDK_CONFIG_APPS 1 00:07:05.216 #define SPDK_CONFIG_ARCH native 00:07:05.216 #undef SPDK_CONFIG_ASAN 00:07:05.216 #undef SPDK_CONFIG_AVAHI 00:07:05.216 #undef SPDK_CONFIG_CET 00:07:05.216 #define SPDK_CONFIG_COVERAGE 1 00:07:05.216 #define SPDK_CONFIG_CROSS_PREFIX 00:07:05.216 #undef SPDK_CONFIG_CRYPTO 00:07:05.216 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:05.216 #undef SPDK_CONFIG_CUSTOMOCF 00:07:05.216 #undef SPDK_CONFIG_DAOS 00:07:05.216 #define SPDK_CONFIG_DAOS_DIR 00:07:05.216 #define SPDK_CONFIG_DEBUG 1 00:07:05.216 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:05.216 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:05.216 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:05.216 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:05.216 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:05.216 #undef SPDK_CONFIG_DPDK_UADK 00:07:05.216 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:05.216 #define SPDK_CONFIG_EXAMPLES 1 00:07:05.216 #undef SPDK_CONFIG_FC 00:07:05.216 #define SPDK_CONFIG_FC_PATH 00:07:05.216 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:05.216 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:05.216 #undef SPDK_CONFIG_FUSE 00:07:05.216 #undef SPDK_CONFIG_FUZZER 00:07:05.216 #define SPDK_CONFIG_FUZZER_LIB 00:07:05.216 #undef SPDK_CONFIG_GOLANG 00:07:05.216 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:05.216 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:05.216 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:05.216 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:05.216 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:05.216 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:05.216 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:05.216 #define SPDK_CONFIG_IDXD 1 00:07:05.216 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:05.216 #undef SPDK_CONFIG_IPSEC_MB 00:07:05.216 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:05.216 #define SPDK_CONFIG_ISAL 1 00:07:05.216 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:05.216 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:05.216 #define SPDK_CONFIG_LIBDIR 00:07:05.216 #undef SPDK_CONFIG_LTO 00:07:05.216 #define SPDK_CONFIG_MAX_LCORES 128 00:07:05.216 #define SPDK_CONFIG_NVME_CUSE 1 00:07:05.216 #undef SPDK_CONFIG_OCF 00:07:05.216 #define SPDK_CONFIG_OCF_PATH 00:07:05.216 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:05.216 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:05.216 #define SPDK_CONFIG_PGO_DIR 00:07:05.216 #undef SPDK_CONFIG_PGO_USE 00:07:05.216 #define SPDK_CONFIG_PREFIX /usr/local 00:07:05.216 #undef SPDK_CONFIG_RAID5F 00:07:05.216 #undef SPDK_CONFIG_RBD 00:07:05.216 #define SPDK_CONFIG_RDMA 1 00:07:05.216 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:05.216 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:05.216 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:05.216 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:05.216 #define SPDK_CONFIG_SHARED 1 00:07:05.216 #undef SPDK_CONFIG_SMA 00:07:05.216 #define SPDK_CONFIG_TESTS 1 00:07:05.216 #undef SPDK_CONFIG_TSAN 00:07:05.216 #define SPDK_CONFIG_UBLK 1 00:07:05.216 #define SPDK_CONFIG_UBSAN 1 00:07:05.216 #undef SPDK_CONFIG_UNIT_TESTS 00:07:05.216 #undef SPDK_CONFIG_URING 00:07:05.216 #define SPDK_CONFIG_URING_PATH 00:07:05.216 #undef SPDK_CONFIG_URING_ZNS 00:07:05.216 #undef SPDK_CONFIG_USDT 00:07:05.216 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:05.216 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:05.216 #define SPDK_CONFIG_VFIO_USER 1 00:07:05.216 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:05.216 #define SPDK_CONFIG_VHOST 1 00:07:05.216 #define SPDK_CONFIG_VIRTIO 1 00:07:05.216 #undef SPDK_CONFIG_VTUNE 00:07:05.216 #define SPDK_CONFIG_VTUNE_DIR 00:07:05.216 #define SPDK_CONFIG_WERROR 1 00:07:05.216 #define SPDK_CONFIG_WPDK_DIR 00:07:05.216 #undef SPDK_CONFIG_XNVME 00:07:05.216 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:05.216 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:05.217 12:52:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.217 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
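The trace above shows autotest_common.sh assembling the sanitizer environment before any test binary is launched: the LeakSanitizer suppression file is rebuilt from scratch, a libfuse3 leak suppression is written into it, and the ASAN/UBSAN/LSAN option strings are exported. A minimal standalone sketch of that pattern follows; the file path and option values are copied from the trace, while the heredoc form and the framing as a separate snippet are only illustrative, not the exact autotest_common.sh code.

#!/usr/bin/env bash
# Sketch of the sanitizer setup seen in the trace above (illustrative, not the full autotest_common.sh).

# Rebuild the LSAN suppression file on every run so no stale entries linger,
# then suppress the known libfuse3 leak reported by LeakSanitizer.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
cat <<'EOF' > "$asan_suppression_file"
leak:libfuse3.so
EOF

# Export consistent, fail-fast sanitizer settings for every SPDK test app.
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
export LSAN_OPTIONS=suppressions=$asan_suppression_file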
00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 485686 ]] 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 485686 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.hrY8qh 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.hrY8qh/tests/target /tmp/spdk.hrY8qh 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:05.218 12:52:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956157952 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4328271872 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=122775105536 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370980352 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6595874816 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680779776 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864253440 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9945088 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=179200 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:05.218 12:52:27 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=324608 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64683855872 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1634304 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:05.218 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:05.219 * Looking for test storage... 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=122775105536 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8810467328 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.219 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.480 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:05.480 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:05.480 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:05.480 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.480 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.480 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.480 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.480 12:52:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.480 12:52:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.480 12:52:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.480 12:52:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.481 12:52:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:13.613 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:13.614 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:07:13.614 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:13.614 Found net devices under 0000:31:00.0: cvl_0_0 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:13.614 Found net devices under 0000:31:00.1: cvl_0_1 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:13.614 12:52:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:13.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:07:13.614 00:07:13.614 --- 10.0.0.2 ping statistics --- 00:07:13.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.614 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
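nvmf_tcp_init, traced above, splits the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, port 4420 is opened in iptables, and both directions are then verified with ping. Condensed into plain commands, with the interface names and addresses from this run:

    # Namespace wiring performed by nvmf_tcp_init (sketch; names/IPs from the trace).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator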
00:07:13.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:07:13.614 00:07:13.614 --- 10.0.0.1 ping statistics --- 00:07:13.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.614 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.614 ************************************ 00:07:13.614 START TEST nvmf_filesystem_no_in_capsule 00:07:13.614 ************************************ 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=489807 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 489807 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 489807 ']' 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.614 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.614 [2024-07-15 12:52:35.175224] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:07:13.614 [2024-07-15 12:52:35.175278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.614 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.614 [2024-07-15 12:52:35.254720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.614 [2024-07-15 12:52:35.328283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.614 [2024-07-15 12:52:35.328322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.614 [2024-07-15 12:52:35.328329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.614 [2024-07-15 12:52:35.328336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.614 [2024-07-15 12:52:35.328342] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:13.614 [2024-07-15 12:52:35.328525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.614 [2024-07-15 12:52:35.328643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.614 [2024-07-15 12:52:35.328799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.614 [2024-07-15 12:52:35.328799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.187 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.187 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:14.187 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:14.187 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:14.187 12:52:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.449 [2024-07-15 12:52:36.039992] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.449 
12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.449 Malloc1 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.449 [2024-07-15 12:52:36.169720] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
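nvmfappstart launches nvmf_tgt inside the target namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and rpc_cmd, essentially a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, then provisions it. The calls traced above correspond to the following direct RPC invocations (a sketch, assuming scripts/rpc.py from an SPDK checkout and the default socket path):

    # Provisioning sequence for the zero in-capsule pass, as direct RPC calls (sketch).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420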
# set +x 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.449 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:14.449 { 00:07:14.449 "name": "Malloc1", 00:07:14.449 "aliases": [ 00:07:14.449 "3332aa00-632c-4725-b462-f94e2899f61e" 00:07:14.449 ], 00:07:14.449 "product_name": "Malloc disk", 00:07:14.449 "block_size": 512, 00:07:14.449 "num_blocks": 1048576, 00:07:14.449 "uuid": "3332aa00-632c-4725-b462-f94e2899f61e", 00:07:14.449 "assigned_rate_limits": { 00:07:14.449 "rw_ios_per_sec": 0, 00:07:14.449 "rw_mbytes_per_sec": 0, 00:07:14.449 "r_mbytes_per_sec": 0, 00:07:14.449 "w_mbytes_per_sec": 0 00:07:14.449 }, 00:07:14.449 "claimed": true, 00:07:14.449 "claim_type": "exclusive_write", 00:07:14.449 "zoned": false, 00:07:14.449 "supported_io_types": { 00:07:14.449 "read": true, 00:07:14.449 "write": true, 00:07:14.449 "unmap": true, 00:07:14.449 "flush": true, 00:07:14.449 "reset": true, 00:07:14.449 "nvme_admin": false, 00:07:14.449 "nvme_io": false, 00:07:14.449 "nvme_io_md": false, 00:07:14.449 "write_zeroes": true, 00:07:14.449 "zcopy": true, 00:07:14.449 "get_zone_info": false, 00:07:14.449 "zone_management": false, 00:07:14.449 "zone_append": false, 00:07:14.449 "compare": false, 00:07:14.449 "compare_and_write": false, 00:07:14.449 "abort": true, 00:07:14.449 "seek_hole": false, 00:07:14.450 "seek_data": false, 00:07:14.450 "copy": true, 00:07:14.450 "nvme_iov_md": false 00:07:14.450 }, 00:07:14.450 "memory_domains": [ 00:07:14.450 { 00:07:14.450 "dma_device_id": "system", 00:07:14.450 "dma_device_type": 1 00:07:14.450 }, 00:07:14.450 { 00:07:14.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.450 "dma_device_type": 2 00:07:14.450 } 00:07:14.450 ], 00:07:14.450 "driver_specific": {} 00:07:14.450 } 00:07:14.450 ]' 00:07:14.450 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:14.450 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:14.450 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:14.711 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:14.711 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:14.711 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:14.711 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:14.711 12:52:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:16.097 12:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:16.097 12:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:16.097 12:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
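get_bdev_size, traced above, asks the target for the Malloc1 bdev description and uses jq to pull out block_size and num_blocks; 512 times 1048576 gives the 536870912 bytes (512 MiB) that the test later compares against the size reported by the attached NVMe namespace. The same computation by hand (sketch, rpc.py path assumed as before):

    # Recompute the Malloc1 size from bdev_get_bdevs output (sketch).
    bdev_json=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_json")      # 512 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_json")      # 1048576 in this run
    echo $(( bs * nb ))                              # 536870912 bytes = 512 MiB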
nvme_devices=0 00:07:16.097 12:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:16.097 12:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:18.023 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:18.023 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:18.023 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:18.023 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:18.023 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:18.023 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:18.285 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:18.285 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:18.285 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:18.285 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:18.285 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:18.285 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:18.285 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:18.285 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:18.285 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:18.285 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:18.285 12:52:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:18.544 12:52:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:19.116 12:52:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.058 ************************************ 
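On the initiator side the trace connects with nvme-cli, loops until lsblk shows a device whose serial matches the subsystem serial SPDKISFASTANDAWESOME, checks that the namespace size equals the malloc bdev size, and lays down a single GPT partition for the filesystem subtests. The same host-side steps, condensed (values from this run; the polling loop is a simplification of waitforserial):

    # Host-side sketch: connect, wait for the namespace, partition it.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/ {print $1; exit}')
    mkdir -p /mnt/device
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe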
00:07:20.058 START TEST filesystem_ext4 00:07:20.058 ************************************ 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:20.058 12:52:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:20.058 mke2fs 1.46.5 (30-Dec-2021) 00:07:20.320 Discarding device blocks: 0/522240 done 00:07:20.320 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:20.320 Filesystem UUID: 54cca0b5-e30c-4a0f-b9f3-d2eaaca5744a 00:07:20.320 Superblock backups stored on blocks: 00:07:20.320 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:20.320 00:07:20.320 Allocating group tables: 0/64 done 00:07:20.320 Writing inode tables: 0/64 done 00:07:23.620 Creating journal (8192 blocks): done 00:07:23.620 Writing superblocks and filesystem accounting information: 0/64 done 00:07:23.620 00:07:23.620 12:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:23.620 12:52:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:23.620 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:23.620 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:23.620 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:23.620 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:23.620 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:23.620 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:23.620 12:52:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 489807 00:07:23.620 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:23.620 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:23.620 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:23.620 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:23.620 00:07:23.620 real 0m3.580s 00:07:23.620 user 0m0.021s 00:07:23.620 sys 0m0.055s 00:07:23.620 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.620 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:23.620 ************************************ 00:07:23.620 END TEST filesystem_ext4 00:07:23.620 ************************************ 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.881 ************************************ 00:07:23.881 START TEST filesystem_btrfs 00:07:23.881 ************************************ 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:23.881 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:23.881 
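filesystem_ext4 above, and filesystem_btrfs and filesystem_xfs below, all run the same nvmf_filesystem_create body: build the filesystem on the first partition, mount it, create and delete a file with syncs in between, unmount, then confirm both that the nvmf_tgt process (pid 489807 in this pass) is still alive and that the partition is still visible to lsblk. Condensed, with the partition name from this run:

    # One filesystem smoke test, condensed from the traced steps (sketch).
    fstype=ext4                               # the trace repeats this for btrfs and xfs
    dev=/dev/nvme0n1p1
    mkfs.$fstype -F "$dev"                    # ext4 takes -F; btrfs and xfs take -f
    mount "$dev" /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 489807                            # target process must still be running
    lsblk -l -o NAME | grep -qw nvme0n1p1     # partition still exposed over NVMe/TCP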
12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:23.881 btrfs-progs v6.6.2 00:07:23.881 See https://btrfs.readthedocs.io for more information. 00:07:23.881 00:07:23.881 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:23.881 NOTE: several default settings have changed in version 5.15, please make sure 00:07:23.881 this does not affect your deployments: 00:07:23.881 - DUP for metadata (-m dup) 00:07:23.881 - enabled no-holes (-O no-holes) 00:07:23.881 - enabled free-space-tree (-R free-space-tree) 00:07:23.881 00:07:23.881 Label: (null) 00:07:23.881 UUID: 85d235f2-6a83-4bb4-8b3e-d94e2878686f 00:07:23.881 Node size: 16384 00:07:23.881 Sector size: 4096 00:07:23.881 Filesystem size: 510.00MiB 00:07:23.881 Block group profiles: 00:07:23.881 Data: single 8.00MiB 00:07:23.881 Metadata: DUP 32.00MiB 00:07:23.881 System: DUP 8.00MiB 00:07:23.881 SSD detected: yes 00:07:23.881 Zoned device: no 00:07:23.881 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:23.881 Runtime features: free-space-tree 00:07:23.882 Checksum: crc32c 00:07:23.882 Number of devices: 1 00:07:23.882 Devices: 00:07:23.882 ID SIZE PATH 00:07:23.882 1 510.00MiB /dev/nvme0n1p1 00:07:23.882 00:07:24.142 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:24.142 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:24.403 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:24.403 12:52:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 489807 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:24.403 00:07:24.403 real 0m0.527s 00:07:24.403 user 0m0.023s 00:07:24.403 sys 0m0.066s 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:24.403 
************************************ 00:07:24.403 END TEST filesystem_btrfs 00:07:24.403 ************************************ 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.403 ************************************ 00:07:24.403 START TEST filesystem_xfs 00:07:24.403 ************************************ 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:24.403 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:24.403 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:24.403 = sectsz=512 attr=2, projid32bit=1 00:07:24.403 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:24.403 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:24.403 data = bsize=4096 blocks=130560, imaxpct=25 00:07:24.403 = sunit=0 swidth=0 blks 00:07:24.403 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:24.403 log =internal log bsize=4096 blocks=16384, version=2 00:07:24.403 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:24.403 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:25.344 Discarding blocks...Done. 
00:07:25.344 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:25.344 12:52:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:27.257 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:27.257 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:27.257 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:27.257 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:27.257 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:27.257 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:27.257 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 489807 00:07:27.257 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:27.257 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:27.518 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:27.518 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:27.518 00:07:27.518 real 0m2.968s 00:07:27.518 user 0m0.033s 00:07:27.518 sys 0m0.047s 00:07:27.518 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.518 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:27.518 ************************************ 00:07:27.518 END TEST filesystem_xfs 00:07:27.518 ************************************ 00:07:27.518 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:27.518 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:27.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.779 12:52:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 489807 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 489807 ']' 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 489807 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 489807 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 489807' 00:07:27.779 killing process with pid 489807 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 489807 00:07:27.779 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 489807 00:07:28.064 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:28.064 00:07:28.064 real 0m14.712s 00:07:28.064 user 0m58.031s 00:07:28.064 sys 0m1.111s 00:07:28.064 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.064 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.064 ************************************ 00:07:28.064 END TEST nvmf_filesystem_no_in_capsule 00:07:28.064 ************************************ 00:07:28.064 12:52:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:28.064 12:52:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:28.064 12:52:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
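Teardown, traced above, mirrors the setup: the test partition is removed under flock, the host disconnects from cnode1, the subsystem is deleted over RPC, and killprocess terminates and reaps the nvmf_tgt instance before the 4096-byte in-capsule pass starts. Roughly:

    # Teardown sketch (identifiers from this run; killprocess is approximated).
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 489807                                # killprocess sends the signal, then waits
    wait 489807 2>/dev/null || true            # only works when the target is a child job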
']' 00:07:28.064 12:52:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.064 12:52:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.324 ************************************ 00:07:28.324 START TEST nvmf_filesystem_in_capsule 00:07:28.324 ************************************ 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=492927 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 492927 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 492927 ']' 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.324 12:52:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.324 [2024-07-15 12:52:49.968847] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:07:28.324 [2024-07-15 12:52:49.968891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.324 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.324 [2024-07-15 12:52:50.050152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.324 [2024-07-15 12:52:50.116301] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.324 [2024-07-15 12:52:50.116345] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:28.324 [2024-07-15 12:52:50.116353] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.324 [2024-07-15 12:52:50.116360] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.324 [2024-07-15 12:52:50.116366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.324 [2024-07-15 12:52:50.116509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.324 [2024-07-15 12:52:50.116622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.324 [2024-07-15 12:52:50.116778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.324 [2024-07-15 12:52:50.116778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.263 [2024-07-15 12:52:50.791843] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.263 Malloc1 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.263 12:52:50 
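The second pass, nvmf_filesystem_in_capsule, repeats the whole sequence with one difference: the TCP transport is created with a 4096-byte in-capsule data size, so small host writes can travel inside the command capsule instead of being fetched with a separate data transfer. The only changed provisioning call (sketch, rpc.py path assumed as before):

    # Only difference from the first pass: allow 4 KiB of in-capsule data.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096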
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.263 [2024-07-15 12:52:50.918578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:29.263 { 00:07:29.263 "name": "Malloc1", 00:07:29.263 "aliases": [ 00:07:29.263 "3a41b32e-89a5-4e67-81ed-f54c4abdf4f1" 00:07:29.263 ], 00:07:29.263 "product_name": "Malloc disk", 00:07:29.263 "block_size": 512, 00:07:29.263 "num_blocks": 1048576, 00:07:29.263 "uuid": "3a41b32e-89a5-4e67-81ed-f54c4abdf4f1", 00:07:29.263 "assigned_rate_limits": { 00:07:29.263 "rw_ios_per_sec": 0, 00:07:29.263 "rw_mbytes_per_sec": 0, 00:07:29.263 "r_mbytes_per_sec": 0, 00:07:29.263 "w_mbytes_per_sec": 0 00:07:29.263 }, 00:07:29.263 "claimed": true, 00:07:29.263 "claim_type": "exclusive_write", 00:07:29.263 "zoned": false, 00:07:29.263 "supported_io_types": { 00:07:29.263 "read": true, 00:07:29.263 "write": true, 00:07:29.263 "unmap": true, 00:07:29.263 "flush": true, 00:07:29.263 "reset": true, 00:07:29.263 "nvme_admin": false, 00:07:29.263 "nvme_io": false, 00:07:29.263 "nvme_io_md": false, 00:07:29.263 "write_zeroes": true, 00:07:29.263 "zcopy": true, 00:07:29.263 "get_zone_info": false, 00:07:29.263 "zone_management": false, 00:07:29.263 
"zone_append": false, 00:07:29.263 "compare": false, 00:07:29.263 "compare_and_write": false, 00:07:29.263 "abort": true, 00:07:29.263 "seek_hole": false, 00:07:29.263 "seek_data": false, 00:07:29.263 "copy": true, 00:07:29.263 "nvme_iov_md": false 00:07:29.263 }, 00:07:29.263 "memory_domains": [ 00:07:29.263 { 00:07:29.263 "dma_device_id": "system", 00:07:29.263 "dma_device_type": 1 00:07:29.263 }, 00:07:29.263 { 00:07:29.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.263 "dma_device_type": 2 00:07:29.263 } 00:07:29.263 ], 00:07:29.263 "driver_specific": {} 00:07:29.263 } 00:07:29.263 ]' 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:29.263 12:52:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:29.263 12:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:29.263 12:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:29.263 12:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:29.263 12:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:29.263 12:52:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:30.707 12:52:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:30.707 12:52:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:30.707 12:52:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:30.707 12:52:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:30.707 12:52:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:33.248 12:52:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.190 ************************************ 00:07:34.190 START TEST filesystem_in_capsule_ext4 00:07:34.190 ************************************ 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:34.190 12:52:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:34.190 12:52:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:34.190 mke2fs 1.46.5 (30-Dec-2021) 00:07:34.190 Discarding device blocks: 0/522240 done 00:07:34.190 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:34.190 Filesystem UUID: 9fad67ca-cb1e-4fa6-a717-f948ad1a593e 00:07:34.190 Superblock backups stored on blocks: 00:07:34.190 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:34.190 00:07:34.190 Allocating group tables: 0/64 done 00:07:34.190 Writing inode tables: 0/64 done 00:07:37.636 Creating journal (8192 blocks): done 00:07:37.636 Writing superblocks and filesystem accounting information: 0/64 done 00:07:37.636 00:07:37.636 12:52:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:37.636 12:52:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.896 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.896 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:37.897 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.897 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:37.897 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:37.897 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.897 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 492927 00:07:37.897 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.897 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.897 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.897 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.897 00:07:37.897 real 0m3.873s 00:07:37.897 user 0m0.024s 00:07:37.897 sys 0m0.053s 00:07:37.897 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.897 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:37.897 ************************************ 00:07:37.897 END TEST filesystem_in_capsule_ext4 00:07:37.897 ************************************ 00:07:38.157 
12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.157 ************************************ 00:07:38.157 START TEST filesystem_in_capsule_btrfs 00:07:38.157 ************************************ 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:38.157 btrfs-progs v6.6.2 00:07:38.157 See https://btrfs.readthedocs.io for more information. 00:07:38.157 00:07:38.157 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:38.157 NOTE: several default settings have changed in version 5.15, please make sure 00:07:38.157 this does not affect your deployments: 00:07:38.157 - DUP for metadata (-m dup) 00:07:38.157 - enabled no-holes (-O no-holes) 00:07:38.157 - enabled free-space-tree (-R free-space-tree) 00:07:38.157 00:07:38.157 Label: (null) 00:07:38.157 UUID: 764b15e0-75dd-4c9e-934d-de17a650247e 00:07:38.157 Node size: 16384 00:07:38.157 Sector size: 4096 00:07:38.157 Filesystem size: 510.00MiB 00:07:38.157 Block group profiles: 00:07:38.157 Data: single 8.00MiB 00:07:38.157 Metadata: DUP 32.00MiB 00:07:38.157 System: DUP 8.00MiB 00:07:38.157 SSD detected: yes 00:07:38.157 Zoned device: no 00:07:38.157 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:38.157 Runtime features: free-space-tree 00:07:38.157 Checksum: crc32c 00:07:38.157 Number of devices: 1 00:07:38.157 Devices: 00:07:38.157 ID SIZE PATH 00:07:38.157 1 510.00MiB /dev/nvme0n1p1 00:07:38.157 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:38.157 12:52:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:39.099 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:39.099 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 492927 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:39.100 00:07:39.100 real 0m1.062s 00:07:39.100 user 0m0.024s 00:07:39.100 sys 0m0.066s 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:39.100 ************************************ 00:07:39.100 END TEST filesystem_in_capsule_btrfs 00:07:39.100 ************************************ 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.100 ************************************ 00:07:39.100 START TEST filesystem_in_capsule_xfs 00:07:39.100 ************************************ 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:39.100 12:53:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:39.360 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:39.360 = sectsz=512 attr=2, projid32bit=1 00:07:39.360 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:39.360 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:39.360 data = bsize=4096 blocks=130560, imaxpct=25 00:07:39.360 = sunit=0 swidth=0 blks 00:07:39.360 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:39.360 log =internal log bsize=4096 blocks=16384, version=2 00:07:39.360 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:39.360 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:40.303 Discarding blocks...Done. 
00:07:40.303 12:53:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:40.303 12:53:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 492927 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.846 00:07:42.846 real 0m3.569s 00:07:42.846 user 0m0.026s 00:07:42.846 sys 0m0.054s 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:42.846 ************************************ 00:07:42.846 END TEST filesystem_in_capsule_xfs 00:07:42.846 ************************************ 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:42.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:42.846 12:53:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.846 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.107 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.107 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:43.107 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 492927 00:07:43.107 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 492927 ']' 00:07:43.107 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 492927 00:07:43.107 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:43.107 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:43.107 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 492927 00:07:43.107 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:43.107 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:43.107 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 492927' 00:07:43.107 killing process with pid 492927 00:07:43.107 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 492927 00:07:43.107 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 492927 00:07:43.368 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:43.368 00:07:43.368 real 0m15.055s 00:07:43.368 user 0m59.382s 00:07:43.368 sys 0m1.081s 00:07:43.368 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.368 12:53:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.368 ************************************ 00:07:43.368 END TEST nvmf_filesystem_in_capsule 00:07:43.368 ************************************ 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:43.368 rmmod nvme_tcp 00:07:43.368 rmmod nvme_fabrics 00:07:43.368 rmmod nvme_keyring 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.368 12:53:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.911 12:53:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:45.911 00:07:45.911 real 0m40.363s 00:07:45.911 user 1m59.869s 00:07:45.911 sys 0m8.224s 00:07:45.911 12:53:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.911 12:53:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.911 ************************************ 00:07:45.911 END TEST nvmf_filesystem 00:07:45.911 ************************************ 00:07:45.911 12:53:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:45.911 12:53:07 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:45.911 12:53:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:45.911 12:53:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.911 12:53:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.911 ************************************ 00:07:45.911 START TEST nvmf_target_discovery 00:07:45.911 ************************************ 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:45.911 * Looking for test storage... 
00:07:45.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.911 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.912 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:45.912 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.912 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.912 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.912 12:53:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.912 12:53:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.912 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.912 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.912 12:53:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.912 12:53:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.042 12:53:15 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:54.042 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.042 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:54.042 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:54.043 Found net devices under 0000:31:00.0: cvl_0_0 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:54.043 Found net devices under 0000:31:00.1: cvl_0_1 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:54.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:07:54.043 00:07:54.043 --- 10.0.0.2 ping statistics --- 00:07:54.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.043 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:07:54.043 00:07:54.043 --- 10.0.0.1 ping statistics --- 00:07:54.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.043 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=500859 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 500859 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 500859 ']' 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:54.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.043 12:53:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.043 [2024-07-15 12:53:15.554446] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:07:54.043 [2024-07-15 12:53:15.554530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.043 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.043 [2024-07-15 12:53:15.636052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.043 [2024-07-15 12:53:15.710367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.043 [2024-07-15 12:53:15.710409] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.043 [2024-07-15 12:53:15.710417] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.043 [2024-07-15 12:53:15.710424] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.043 [2024-07-15 12:53:15.710430] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.043 [2024-07-15 12:53:15.710575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.043 [2024-07-15 12:53:15.710691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.043 [2024-07-15 12:53:15.710844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.043 [2024-07-15 12:53:15.710845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.615 [2024-07-15 12:53:16.367840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.615 Null1 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.615 [2024-07-15 12:53:16.428143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.615 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 Null2 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:54.877 12:53:16 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 Null3 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.877 Null4 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.877 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.878 12:53:16 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:07:54.878 00:07:54.878 Discovery Log Number of Records 6, Generation counter 6 00:07:54.878 =====Discovery Log Entry 0====== 00:07:54.878 trtype: tcp 00:07:54.878 adrfam: ipv4 00:07:54.878 subtype: current discovery subsystem 00:07:54.878 treq: not required 00:07:54.878 portid: 0 00:07:54.878 trsvcid: 4420 00:07:54.878 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:54.878 traddr: 10.0.0.2 00:07:54.878 eflags: explicit discovery connections, duplicate discovery information 00:07:54.878 sectype: none 00:07:54.878 =====Discovery Log Entry 1====== 00:07:54.878 trtype: tcp 00:07:54.878 adrfam: ipv4 00:07:54.878 subtype: nvme subsystem 00:07:54.878 treq: not required 00:07:54.878 portid: 0 00:07:54.878 trsvcid: 4420 00:07:54.878 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:54.878 traddr: 10.0.0.2 00:07:54.878 eflags: none 00:07:54.878 sectype: none 00:07:54.878 =====Discovery Log Entry 2====== 00:07:54.878 trtype: tcp 00:07:54.878 adrfam: ipv4 00:07:54.878 subtype: nvme subsystem 00:07:54.878 treq: not required 00:07:54.878 portid: 0 00:07:54.878 trsvcid: 4420 00:07:54.878 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:54.878 traddr: 10.0.0.2 00:07:54.878 eflags: none 00:07:54.878 sectype: none 00:07:54.878 =====Discovery Log Entry 3====== 00:07:54.878 trtype: tcp 00:07:54.878 adrfam: ipv4 00:07:54.878 subtype: nvme subsystem 00:07:54.878 treq: not required 00:07:54.878 portid: 0 00:07:54.878 trsvcid: 4420 00:07:54.878 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:54.878 traddr: 10.0.0.2 00:07:54.878 eflags: none 00:07:54.878 sectype: none 00:07:54.878 =====Discovery Log Entry 4====== 00:07:54.878 trtype: tcp 00:07:54.878 adrfam: ipv4 00:07:54.878 subtype: nvme subsystem 00:07:54.878 treq: not required 
00:07:54.878 portid: 0 00:07:54.878 trsvcid: 4420 00:07:54.878 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:54.878 traddr: 10.0.0.2 00:07:54.878 eflags: none 00:07:54.878 sectype: none 00:07:54.878 =====Discovery Log Entry 5====== 00:07:54.878 trtype: tcp 00:07:54.878 adrfam: ipv4 00:07:54.878 subtype: discovery subsystem referral 00:07:54.878 treq: not required 00:07:54.878 portid: 0 00:07:54.878 trsvcid: 4430 00:07:54.878 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:54.878 traddr: 10.0.0.2 00:07:54.878 eflags: none 00:07:54.878 sectype: none 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:54.878 Perform nvmf subsystem discovery via RPC 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:54.878 [ 00:07:54.878 { 00:07:54.878 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:54.878 "subtype": "Discovery", 00:07:54.878 "listen_addresses": [ 00:07:54.878 { 00:07:54.878 "trtype": "TCP", 00:07:54.878 "adrfam": "IPv4", 00:07:54.878 "traddr": "10.0.0.2", 00:07:54.878 "trsvcid": "4420" 00:07:54.878 } 00:07:54.878 ], 00:07:54.878 "allow_any_host": true, 00:07:54.878 "hosts": [] 00:07:54.878 }, 00:07:54.878 { 00:07:54.878 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:54.878 "subtype": "NVMe", 00:07:54.878 "listen_addresses": [ 00:07:54.878 { 00:07:54.878 "trtype": "TCP", 00:07:54.878 "adrfam": "IPv4", 00:07:54.878 "traddr": "10.0.0.2", 00:07:54.878 "trsvcid": "4420" 00:07:54.878 } 00:07:54.878 ], 00:07:54.878 "allow_any_host": true, 00:07:54.878 "hosts": [], 00:07:54.878 "serial_number": "SPDK00000000000001", 00:07:54.878 "model_number": "SPDK bdev Controller", 00:07:54.878 "max_namespaces": 32, 00:07:54.878 "min_cntlid": 1, 00:07:54.878 "max_cntlid": 65519, 00:07:54.878 "namespaces": [ 00:07:54.878 { 00:07:54.878 "nsid": 1, 00:07:54.878 "bdev_name": "Null1", 00:07:54.878 "name": "Null1", 00:07:54.878 "nguid": "064248111FDA414BBD6E96062D827880", 00:07:54.878 "uuid": "06424811-1fda-414b-bd6e-96062d827880" 00:07:54.878 } 00:07:54.878 ] 00:07:54.878 }, 00:07:54.878 { 00:07:54.878 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:54.878 "subtype": "NVMe", 00:07:54.878 "listen_addresses": [ 00:07:54.878 { 00:07:54.878 "trtype": "TCP", 00:07:54.878 "adrfam": "IPv4", 00:07:54.878 "traddr": "10.0.0.2", 00:07:54.878 "trsvcid": "4420" 00:07:54.878 } 00:07:54.878 ], 00:07:54.878 "allow_any_host": true, 00:07:54.878 "hosts": [], 00:07:54.878 "serial_number": "SPDK00000000000002", 00:07:54.878 "model_number": "SPDK bdev Controller", 00:07:54.878 "max_namespaces": 32, 00:07:54.878 "min_cntlid": 1, 00:07:54.878 "max_cntlid": 65519, 00:07:54.878 "namespaces": [ 00:07:54.878 { 00:07:54.878 "nsid": 1, 00:07:54.878 "bdev_name": "Null2", 00:07:54.878 "name": "Null2", 00:07:54.878 "nguid": "1DF88F5F7EF340EE9BBB49F7D33D34C8", 00:07:54.878 "uuid": "1df88f5f-7ef3-40ee-9bbb-49f7d33d34c8" 00:07:54.878 } 00:07:54.878 ] 00:07:54.878 }, 00:07:54.878 { 00:07:54.878 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:54.878 "subtype": "NVMe", 00:07:54.878 "listen_addresses": [ 00:07:54.878 { 00:07:54.878 "trtype": "TCP", 00:07:54.878 "adrfam": "IPv4", 00:07:54.878 "traddr": "10.0.0.2", 00:07:54.878 "trsvcid": "4420" 00:07:54.878 } 00:07:54.878 ], 00:07:54.878 "allow_any_host": true, 
00:07:54.878 "hosts": [], 00:07:54.878 "serial_number": "SPDK00000000000003", 00:07:54.878 "model_number": "SPDK bdev Controller", 00:07:54.878 "max_namespaces": 32, 00:07:54.878 "min_cntlid": 1, 00:07:54.878 "max_cntlid": 65519, 00:07:54.878 "namespaces": [ 00:07:54.878 { 00:07:54.878 "nsid": 1, 00:07:54.878 "bdev_name": "Null3", 00:07:54.878 "name": "Null3", 00:07:54.878 "nguid": "8172A907EFF147AA9C8868CA16CCE7F3", 00:07:54.878 "uuid": "8172a907-eff1-47aa-9c88-68ca16cce7f3" 00:07:54.878 } 00:07:54.878 ] 00:07:54.878 }, 00:07:54.878 { 00:07:54.878 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:54.878 "subtype": "NVMe", 00:07:54.878 "listen_addresses": [ 00:07:54.878 { 00:07:54.878 "trtype": "TCP", 00:07:54.878 "adrfam": "IPv4", 00:07:54.878 "traddr": "10.0.0.2", 00:07:54.878 "trsvcid": "4420" 00:07:54.878 } 00:07:54.878 ], 00:07:54.878 "allow_any_host": true, 00:07:54.878 "hosts": [], 00:07:54.878 "serial_number": "SPDK00000000000004", 00:07:54.878 "model_number": "SPDK bdev Controller", 00:07:54.878 "max_namespaces": 32, 00:07:54.878 "min_cntlid": 1, 00:07:54.878 "max_cntlid": 65519, 00:07:54.878 "namespaces": [ 00:07:54.878 { 00:07:54.878 "nsid": 1, 00:07:54.878 "bdev_name": "Null4", 00:07:54.878 "name": "Null4", 00:07:54.878 "nguid": "D2A6D87611224D0893C85D413C266AE2", 00:07:54.878 "uuid": "d2a6d876-1122-4d08-93c8-5d413c266ae2" 00:07:54.878 } 00:07:54.878 ] 00:07:54.878 } 00:07:54.878 ] 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.878 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
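
The trace above exercises the full discovery setup: four null bdevs, four NVMe-oF subsystems with one namespace each, a TCP listener per subsystem, a discovery-service listener, and a referral, verified first with nvme-cli and then over JSON-RPC. As a rough sketch of how one iteration of that flow could be reproduced by hand against an already running nvmf_tgt (the rpc_cmd wrapper in the trace drives SPDK's JSON-RPC interface; scripts/rpc.py on its default socket is assumed here, and 10.0.0.2 with ports 4420/4430 are simply the addresses used in this run):

  # the TCP transport must exist before a TCP listener can be added (created earlier in each test here)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # backing namespace and subsystem for one node
  scripts/rpc.py bdev_null_create Null1 102400 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # discovery-service listener plus a referral to a second discovery service
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  # verify with nvme-cli (add --hostnqn/--hostid as in the trace to override the default host identity)
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # verify over RPC, matching the nvmf_get_subsystems output logged above
  scripts/rpc.py nvmf_get_subsystems

Teardown in the surrounding trace mirrors this per node: nvmf_delete_subsystem and bdev_null_delete for each subsystem, then nvmf_discovery_remove_referral.
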
00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:55.140 rmmod nvme_tcp 00:07:55.140 rmmod nvme_fabrics 00:07:55.140 rmmod nvme_keyring 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:55.140 12:53:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:55.141 12:53:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 500859 ']' 00:07:55.141 12:53:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 500859 00:07:55.141 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 500859 ']' 00:07:55.141 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 500859 00:07:55.141 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:55.141 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:55.141 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 500859 00:07:55.141 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:55.141 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:55.141 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 500859' 00:07:55.141 killing process with pid 500859 00:07:55.141 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 500859 00:07:55.141 12:53:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 500859 00:07:55.402 12:53:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:55.402 12:53:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:55.402 12:53:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:55.402 12:53:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.402 12:53:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:55.402 12:53:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.402 12:53:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.402 12:53:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.950 12:53:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:57.950 00:07:57.950 real 0m11.933s 00:07:57.950 user 0m8.027s 00:07:57.950 sys 0m6.297s 00:07:57.950 12:53:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.950 12:53:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:57.950 ************************************ 00:07:57.950 END TEST nvmf_target_discovery 00:07:57.950 ************************************ 00:07:57.950 12:53:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:07:57.950 12:53:19 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:57.950 12:53:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.950 12:53:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.950 12:53:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:57.950 ************************************ 00:07:57.950 START TEST nvmf_referrals 00:07:57.950 ************************************ 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:57.950 * Looking for test storage... 00:07:57.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.950 12:53:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.099 12:53:27 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.099 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:06.100 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:06.100 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.100 12:53:27 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:06.100 Found net devices under 0000:31:00.0: cvl_0_0 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:06.100 Found net devices under 0000:31:00.1: cvl_0_1 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.100 12:53:27 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:06.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:08:06.100 00:08:06.100 --- 10.0.0.2 ping statistics --- 00:08:06.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.100 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:08:06.100 00:08:06.100 --- 10.0.0.1 ping statistics --- 00:08:06.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.100 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=505906 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 505906 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 505906 ']' 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:06.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.100 12:53:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.100 [2024-07-15 12:53:27.597125] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:08:06.100 [2024-07-15 12:53:27.597179] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.100 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.100 [2024-07-15 12:53:27.670678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.100 [2024-07-15 12:53:27.735764] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.100 [2024-07-15 12:53:27.735801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.100 [2024-07-15 12:53:27.735809] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.100 [2024-07-15 12:53:27.735815] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.100 [2024-07-15 12:53:27.735820] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.100 [2024-07-15 12:53:27.735965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.100 [2024-07-15 12:53:27.736077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.100 [2024-07-15 12:53:27.736236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.100 [2024-07-15 12:53:27.736249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.671 [2024-07-15 12:53:28.408847] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.671 [2024-07-15 12:53:28.425015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.671 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.932 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:07.193 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.194 12:53:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.194 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:07.194 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:07.194 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:07.194 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:07.194 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:07.194 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.194 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:07.194 12:53:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:07.454 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:07.454 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:07.454 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:07.454 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:07.454 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:07.454 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.454 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:07.454 12:53:29 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:07.454 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:07.454 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:07.454 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:07.454 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.454 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:07.714 12:53:29 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.714 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:07.974 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:07.974 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:07.974 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:07.974 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:07.974 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.974 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:07.974 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:07.974 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:07.974 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.974 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.974 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.974 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:07.975 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:07.975 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.975 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.975 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.975 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:07.975 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:07.975 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:07.975 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:07.975 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.975 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:07.975 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:08.235 
12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.235 rmmod nvme_tcp 00:08:08.235 rmmod nvme_fabrics 00:08:08.235 rmmod nvme_keyring 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 505906 ']' 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 505906 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 505906 ']' 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 505906 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.235 12:53:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 505906 00:08:08.235 12:53:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.235 12:53:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.235 12:53:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 505906' 00:08:08.235 killing process with pid 505906 00:08:08.235 12:53:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 505906 00:08:08.235 12:53:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 505906 00:08:08.496 12:53:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.496 12:53:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.496 12:53:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.496 12:53:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.496 12:53:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.496 12:53:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.496 12:53:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.496 12:53:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.408 12:53:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.408 00:08:10.408 real 0m12.996s 00:08:10.408 user 0m12.954s 00:08:10.408 sys 0m6.490s 00:08:10.408 12:53:32 nvmf_tcp.nvmf_referrals 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.408 12:53:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.408 ************************************ 00:08:10.408 END TEST nvmf_referrals 00:08:10.408 ************************************ 00:08:10.669 12:53:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:10.669 12:53:32 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:10.669 12:53:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.669 12:53:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.669 12:53:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.669 ************************************ 00:08:10.669 START TEST nvmf_connect_disconnect 00:08:10.669 ************************************ 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:10.669 * Looking for test storage... 00:08:10.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.669 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.670 12:53:32 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.670 12:53:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:18.814 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:18.814 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:18.814 12:53:40 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:18.814 Found net devices under 0000:31:00.0: cvl_0_0 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:18.814 Found net devices under 0000:31:00.1: cvl_0_1 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:18.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:08:18.814 00:08:18.814 --- 10.0.0.2 ping statistics --- 00:08:18.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.814 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:08:18.814 00:08:18.814 --- 10.0.0.1 ping statistics --- 00:08:18.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.814 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=511028 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 511028 00:08:18.814 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:18.815 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 511028 ']' 00:08:18.815 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.815 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:18.815 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.815 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:18.815 12:53:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:18.815 [2024-07-15 12:53:40.618251] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
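The nvmf_tcp_init sequence traced above moves one port of the E810 pair into a private network namespace, addresses both sides on 10.0.0.0/24, opens TCP port 4420, and ping-checks the path in both directions before the target is started. A condensed sketch of that bring-up, assuming the interface and namespace names from this rig (the real logic lives in test/nvmf/common.sh):

# Sketch only -- mirrors the commands visible in the trace above.
NS=cvl_0_0_ns_spdk        # target-side namespace
TGT_IF=cvl_0_0            # NIC port handed to the SPDK target
INI_IF=cvl_0_1            # NIC port left on the host for the initiator

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                              # target port disappears from the host
ip addr add 10.0.0.1/24 dev "$INI_IF"                          # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"      # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # host -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                         # namespace -> host

Once both pings succeed the script loads nvme-tcp on the host and starts nvmf_tgt inside the namespace, which is where the SPDK/DPDK startup banner that follows comes from.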
00:08:18.815 [2024-07-15 12:53:40.618306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.075 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.075 [2024-07-15 12:53:40.695714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.075 [2024-07-15 12:53:40.768975] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.076 [2024-07-15 12:53:40.769016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.076 [2024-07-15 12:53:40.769025] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.076 [2024-07-15 12:53:40.769037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.076 [2024-07-15 12:53:40.769042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.076 [2024-07-15 12:53:40.769183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.076 [2024-07-15 12:53:40.769317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.076 [2024-07-15 12:53:40.769422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.076 [2024-07-15 12:53:40.769422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.646 [2024-07-15 12:53:41.443876] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.646 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.907 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:19.907 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.907 12:53:41 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.907 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.907 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.907 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.907 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.907 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.907 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.907 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.907 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.908 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:19.908 [2024-07-15 12:53:41.503275] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.908 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.908 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:19.908 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:19.908 12:53:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:24.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.337 rmmod nvme_tcp 00:08:38.337 rmmod nvme_fabrics 00:08:38.337 rmmod nvme_keyring 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 511028 ']' 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 511028 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 511028 ']' 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 511028 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 511028 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 511028' 00:08:38.337 killing process with pid 511028 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 511028 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 511028 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.337 12:53:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.252 12:54:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:40.252 00:08:40.252 real 0m29.520s 00:08:40.252 user 1m17.910s 00:08:40.252 sys 0m7.118s 00:08:40.252 12:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.252 12:54:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:40.252 ************************************ 00:08:40.252 END TEST nvmf_connect_disconnect 00:08:40.252 ************************************ 00:08:40.252 12:54:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:40.252 12:54:01 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:40.252 12:54:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:40.252 12:54:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.252 12:54:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:40.252 ************************************ 00:08:40.252 START TEST nvmf_multitarget 00:08:40.252 ************************************ 00:08:40.252 12:54:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:40.252 * Looking for test storage... 
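The connect/disconnect test that ends above drives the target entirely over the SPDK RPC socket and then bounces an NVMe/TCP host connection five times (num_iterations=5 in the trace). The RPC calls below are copied from the trace; the host-side nvme-cli commands are not shown in this excerpt, so that part of the sketch is an assumption about how the loop is exercised rather than a quote of connect_disconnect.sh:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # the script itself goes through its rpc_cmd wrapper

# Target provisioning, as traced.
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
$rpc bdev_malloc_create 64 512                                          # creates Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Assumed nvme-cli form of the five iterations; only the
# "NQN:... disconnected 1 controller(s)" lines appear in the log.
for i in $(seq 1 5); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done

# Teardown (nvmftestfini), as traced: unload host modules, stop the target,
# and drop the namespace and leftover addresses.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                    # 511028 in this run; the script's killprocess also waits for exit
ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1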
00:08:40.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.252 12:54:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.252 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:40.252 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.252 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.252 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.252 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.252 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.252 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.252 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.252 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.252 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.252 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.252 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
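Before any of the namespace work, each test calls gather_supported_nvmf_pci_devs, which produced the "Found 0000:31:00.x (0x8086 - 0x159b)" and "Found net devices under ...: cvl_0_x" lines earlier and does so again below for the multitarget run. It filters the known Intel/Mellanox NVMe-oF-capable device IDs down to the two E810 functions on this rig and resolves each PCI address to its kernel net device through sysfs. A rough equivalent of that lookup, with lspci used here only for illustration:

# Sketch of the sysfs resolution behind "Found net devices under 0000:31:00.x: cvl_0_x".
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci (0x8086 - 0x159b)"
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue
        echo "Found net devices under $pci: $(basename "$netdir")"
    done
done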
00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:40.253 12:54:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:48.405 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.405 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:48.406 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:48.406 Found net devices under 0000:31:00.0: cvl_0_0 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:48.406 Found net devices under 0000:31:00.1: cvl_0_1 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.406 12:54:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.406 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.406 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.406 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.406 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:08:48.668 00:08:48.668 --- 10.0.0.2 ping statistics --- 00:08:48.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.668 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:08:48.668 00:08:48.668 --- 10.0.0.1 ping statistics --- 00:08:48.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.668 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=520126 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 520126 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 520126 ']' 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.668 12:54:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:48.668 [2024-07-15 12:54:10.393819] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
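nvmfappstart, traced just above for the multitarget run (and earlier for connect/disconnect), launches nvmf_tgt inside the target namespace and then blocks until the application answers on its RPC socket. The launch command is copied from the trace; the readiness poll shown here is only one plausible way to wait and is not the literal waitforlisten helper:

# Start the SPDK NVMe-oF target in the target-side namespace (command as traced).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Assumed readiness check: poll /var/tmp/spdk.sock until an RPC succeeds.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done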
00:08:48.668 [2024-07-15 12:54:10.393885] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.668 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.668 [2024-07-15 12:54:10.474604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.929 [2024-07-15 12:54:10.550299] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.929 [2024-07-15 12:54:10.550341] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.929 [2024-07-15 12:54:10.550349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.929 [2024-07-15 12:54:10.550356] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.929 [2024-07-15 12:54:10.550362] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.929 [2024-07-15 12:54:10.550446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.929 [2024-07-15 12:54:10.550564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.929 [2024-07-15 12:54:10.550722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.929 [2024-07-15 12:54:10.550723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.501 12:54:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.501 12:54:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:49.501 12:54:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.501 12:54:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.501 12:54:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:49.501 12:54:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.501 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:49.501 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:49.501 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:49.501 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:49.501 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:49.762 "nvmf_tgt_1" 00:08:49.762 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:49.762 "nvmf_tgt_2" 00:08:49.762 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:49.762 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:49.762 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:49.762 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:50.024 true 00:08:50.024 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:50.024 true 00:08:50.024 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:50.024 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.285 rmmod nvme_tcp 00:08:50.285 rmmod nvme_fabrics 00:08:50.285 rmmod nvme_keyring 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 520126 ']' 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 520126 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 520126 ']' 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 520126 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:50.285 12:54:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 520126 00:08:50.285 12:54:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:50.285 12:54:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:50.285 12:54:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 520126' 00:08:50.285 killing process with pid 520126 00:08:50.285 12:54:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 520126 00:08:50.285 12:54:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 520126 00:08:50.546 12:54:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:50.546 12:54:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:50.546 12:54:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:50.546 12:54:12 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.546 12:54:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.546 12:54:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.546 12:54:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.546 12:54:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.459 12:54:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:52.459 00:08:52.459 real 0m12.306s 00:08:52.459 user 0m9.389s 00:08:52.459 sys 0m6.587s 00:08:52.459 12:54:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.459 12:54:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:52.459 ************************************ 00:08:52.459 END TEST nvmf_multitarget 00:08:52.459 ************************************ 00:08:52.459 12:54:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:52.459 12:54:14 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:52.459 12:54:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:52.459 12:54:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.459 12:54:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:52.721 ************************************ 00:08:52.721 START TEST nvmf_rpc 00:08:52.721 ************************************ 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:52.721 * Looking for test storage... 
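The nvmf_multitarget test that just wrapped up is, condensed, a handful of RPCs against that target: create two extra targets, confirm the count, delete them, confirm the count drops back. A sketch of the same sequence (multitarget_rpc.py sits under test/nvmf/target/ in the SPDK tree and wraps the nvmf_*_target RPCs):

    RPC=./test/nvmf/target/multitarget_rpc.py
    $RPC nvmf_get_targets | jq length             # 1: only the default target
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    $RPC nvmf_get_targets | jq length             # 3
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    $RPC nvmf_get_targets | jq length             # back to 1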
00:08:52.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:52.721 12:54:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:52.722 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.722 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.722 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.722 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.722 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.722 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.722 12:54:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.722 12:54:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.722 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.722 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.722 12:54:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.722 12:54:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
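One detail from the common.sh sourcing above that the connect steps later in rpc.sh depend on: the host NQN comes from nvme gen-hostnqn and the matching --hostid is the UUID embedded in it. A sketch of that pairing; the exact expansion common.sh uses is not visible in this trace, so the suffix strip below is an assumption:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: reuse the trailing UUID as the host ID
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420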
00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:00.867 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:00.867 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:00.867 Found net devices under 0000:31:00.0: cvl_0_0 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:00.867 Found net devices under 0000:31:00.1: cvl_0_1 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:00.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.738 ms 00:09:00.867 00:09:00.867 --- 10.0.0.2 ping statistics --- 00:09:00.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.867 rtt min/avg/max/mdev = 0.738/0.738/0.738/0.000 ms 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:09:00.867 00:09:00.867 --- 10.0.0.1 ping statistics --- 00:09:00.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.867 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=525257 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 525257 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 525257 ']' 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.867 12:54:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.868 12:54:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.868 12:54:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.868 12:54:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.868 [2024-07-15 12:54:22.566538] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:09:00.868 [2024-07-15 12:54:22.566600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.868 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.868 [2024-07-15 12:54:22.650156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.127 [2024-07-15 12:54:22.725228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.127 [2024-07-15 12:54:22.725274] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:01.127 [2024-07-15 12:54:22.725282] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.127 [2024-07-15 12:54:22.725289] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.127 [2024-07-15 12:54:22.725294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.127 [2024-07-15 12:54:22.725376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.127 [2024-07-15 12:54:22.725493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.127 [2024-07-15 12:54:22.725650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.127 [2024-07-15 12:54:22.725651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:01.696 "tick_rate": 2400000000, 00:09:01.696 "poll_groups": [ 00:09:01.696 { 00:09:01.696 "name": "nvmf_tgt_poll_group_000", 00:09:01.696 "admin_qpairs": 0, 00:09:01.696 "io_qpairs": 0, 00:09:01.696 "current_admin_qpairs": 0, 00:09:01.696 "current_io_qpairs": 0, 00:09:01.696 "pending_bdev_io": 0, 00:09:01.696 "completed_nvme_io": 0, 00:09:01.696 "transports": [] 00:09:01.696 }, 00:09:01.696 { 00:09:01.696 "name": "nvmf_tgt_poll_group_001", 00:09:01.696 "admin_qpairs": 0, 00:09:01.696 "io_qpairs": 0, 00:09:01.696 "current_admin_qpairs": 0, 00:09:01.696 "current_io_qpairs": 0, 00:09:01.696 "pending_bdev_io": 0, 00:09:01.696 "completed_nvme_io": 0, 00:09:01.696 "transports": [] 00:09:01.696 }, 00:09:01.696 { 00:09:01.696 "name": "nvmf_tgt_poll_group_002", 00:09:01.696 "admin_qpairs": 0, 00:09:01.696 "io_qpairs": 0, 00:09:01.696 "current_admin_qpairs": 0, 00:09:01.696 "current_io_qpairs": 0, 00:09:01.696 "pending_bdev_io": 0, 00:09:01.696 "completed_nvme_io": 0, 00:09:01.696 "transports": [] 00:09:01.696 }, 00:09:01.696 { 00:09:01.696 "name": "nvmf_tgt_poll_group_003", 00:09:01.696 "admin_qpairs": 0, 00:09:01.696 "io_qpairs": 0, 00:09:01.696 "current_admin_qpairs": 0, 00:09:01.696 "current_io_qpairs": 0, 00:09:01.696 "pending_bdev_io": 0, 00:09:01.696 "completed_nvme_io": 0, 00:09:01.696 "transports": [] 00:09:01.696 } 00:09:01.696 ] 00:09:01.696 }' 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.696 [2024-07-15 12:54:23.509208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.696 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:01.955 "tick_rate": 2400000000, 00:09:01.955 "poll_groups": [ 00:09:01.955 { 00:09:01.955 "name": "nvmf_tgt_poll_group_000", 00:09:01.955 "admin_qpairs": 0, 00:09:01.955 "io_qpairs": 0, 00:09:01.955 "current_admin_qpairs": 0, 00:09:01.955 "current_io_qpairs": 0, 00:09:01.955 "pending_bdev_io": 0, 00:09:01.955 "completed_nvme_io": 0, 00:09:01.955 "transports": [ 00:09:01.955 { 00:09:01.955 "trtype": "TCP" 00:09:01.955 } 00:09:01.955 ] 00:09:01.955 }, 00:09:01.955 { 00:09:01.955 "name": "nvmf_tgt_poll_group_001", 00:09:01.955 "admin_qpairs": 0, 00:09:01.955 "io_qpairs": 0, 00:09:01.955 "current_admin_qpairs": 0, 00:09:01.955 "current_io_qpairs": 0, 00:09:01.955 "pending_bdev_io": 0, 00:09:01.955 "completed_nvme_io": 0, 00:09:01.955 "transports": [ 00:09:01.955 { 00:09:01.955 "trtype": "TCP" 00:09:01.955 } 00:09:01.955 ] 00:09:01.955 }, 00:09:01.955 { 00:09:01.955 "name": "nvmf_tgt_poll_group_002", 00:09:01.955 "admin_qpairs": 0, 00:09:01.955 "io_qpairs": 0, 00:09:01.955 "current_admin_qpairs": 0, 00:09:01.955 "current_io_qpairs": 0, 00:09:01.955 "pending_bdev_io": 0, 00:09:01.955 "completed_nvme_io": 0, 00:09:01.955 "transports": [ 00:09:01.955 { 00:09:01.955 "trtype": "TCP" 00:09:01.955 } 00:09:01.955 ] 00:09:01.955 }, 00:09:01.955 { 00:09:01.955 "name": "nvmf_tgt_poll_group_003", 00:09:01.955 "admin_qpairs": 0, 00:09:01.955 "io_qpairs": 0, 00:09:01.955 "current_admin_qpairs": 0, 00:09:01.955 "current_io_qpairs": 0, 00:09:01.955 "pending_bdev_io": 0, 00:09:01.955 "completed_nvme_io": 0, 00:09:01.955 "transports": [ 00:09:01.955 { 00:09:01.955 "trtype": "TCP" 00:09:01.955 } 00:09:01.955 ] 00:09:01.955 } 00:09:01.955 ] 00:09:01.955 }' 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
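The poll-group assertions above read nvmf_get_stats with jq; the same numbers can be pulled by hand with the stock RPC client against the same /var/tmp/spdk.sock (with -m 0xF there is one poll group per core, hence the count of 4):

    ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l          # 4 poll groups
    ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' \
        | awk '{s+=$1} END {print s}'                                           # 0 until a host connects
    ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype' # "TCP" once the transport is created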
00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.955 Malloc1 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:01.955 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.956 [2024-07-15 12:54:23.696982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:01.956 [2024-07-15 12:54:23.723540] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:01.956 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:01.956 could not add new controller: failed to write to nvme-fabrics device 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.956 12:54:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.861 12:54:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:03.861 12:54:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:03.861 12:54:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.861 12:54:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:03.861 12:54:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.769 12:54:27 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.769 [2024-07-15 12:54:27.370840] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:05.769 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:05.769 could not add new controller: failed to write to nvme-fabrics device 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.769 12:54:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.153 12:54:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:07.153 12:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:07.153 12:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.153 12:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:07.153 12:54:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:09.700 12:54:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:09.700 12:54:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:09.700 12:54:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.700 12:54:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:09.700 12:54:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.700 12:54:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:09.700 12:54:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.700 12:54:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:09.700 12:54:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:09.700 12:54:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:09.700 12:54:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:09.700 12:54:31 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.700 [2024-07-15 12:54:31.050930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.700 12:54:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:11.086 12:54:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.086 12:54:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:11.086 12:54:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.086 12:54:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:11.086 12:54:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.094 [2024-07-15 12:54:34.670734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.094 12:54:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.481 12:54:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:14.481 12:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:14.481 12:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.481 12:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:14.481 12:54:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:16.397 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:16.397 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:16.397 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.397 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:16.397 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.397 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:16.397 12:54:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.658 [2024-07-15 12:54:38.342990] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.658 12:54:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:18.045 12:54:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.045 12:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:18.045 12:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.045 12:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:18.045 12:54:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.593 12:54:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.593 [2024-07-15 12:54:42.007698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.593 12:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.593 12:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:20.593 12:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.593 12:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.593 12:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.593 12:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.593 12:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.593 12:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.593 12:54:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.593 12:54:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.979 12:54:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:21.980 12:54:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:21.980 12:54:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.980 12:54:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:21.980 12:54:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:23.891 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:23.891 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:23.891 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.891 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:23.891 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.891 
12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:23.891 12:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.891 12:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.891 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:23.891 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:23.891 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.892 [2024-07-15 12:54:45.675418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.892 12:54:45 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.892 12:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.804 12:54:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:25.804 12:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:25.804 12:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.804 12:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:25.804 12:54:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 [2024-07-15 12:54:49.343464] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 [2024-07-15 12:54:49.403610] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 [2024-07-15 12:54:49.467809] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
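The iterations traced above and below churn the same subsystem over and over through rpc_cmd, which forwards to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py. A minimal sketch of one pass, reconstructed from the trace rather than taken verbatim from target/rpc.sh:

#!/usr/bin/env bash
# One create/teardown pass of the subsystem churn loop seen in the trace.
# The RPC names and arguments are the ones printed above; the loop bound (5)
# matches the 'seq 1 5' in the trace.
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for i in $(seq 1 5); do
    # Build the subsystem: fixed serial, TCP listener, one Malloc namespace, open host access.
    "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
    "$rpc" nvmf_subsystem_allow_any_host "$nqn"

    # The earlier loop additionally connects from the initiator at this point
    # (see the connect/poll sketch just below).

    # Tear it back down: remove the namespace by NSID, then the subsystem.
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 5
    "$rpc" nvmf_delete_subsystem "$nqn"
done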
00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.715 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.716 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.716 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:27.716 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:27.716 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.716 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.716 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.716 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.716 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.716 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.716 [2024-07-15 12:54:49.528005] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.716 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.716 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:27.716 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.716 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.974 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.974 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:27.974 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.974 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.974 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.974 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.974 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
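When the loop also exercises the initiator (the nvme connect/disconnect entries earlier in the trace), the waitforserial and waitforserial_disconnect helpers poll lsblk for the subsystem's serial number. A sketch of that cycle, assuming nvme-cli on the initiator; the hostnqn/hostid values are the ones printed in the trace:

#!/usr/bin/env bash
# Connect, wait for the namespace to surface as a block device, disconnect,
# and wait for it to disappear again -- the pattern behind the lsblk/grep
# polling visible in the trace.

nqn=nqn.2016-06.io.spdk:cnode1
serial=SPDKISFASTANDAWESOME

nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396

# waitforserial: up to 16 tries, two seconds apart, until exactly one device
# reports the expected serial.
i=0
while (( i++ <= 15 )); do
    sleep 2
    (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )) && break
done

nvme disconnect -n "$nqn"

# waitforserial_disconnect: poll until the serial is gone from lsblk output.
i=0
while (( i++ <= 15 )) && lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
    sleep 2
done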
00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.975 [2024-07-15 12:54:49.588206] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:27.975 "tick_rate": 2400000000, 00:09:27.975 "poll_groups": [ 00:09:27.975 { 00:09:27.975 "name": "nvmf_tgt_poll_group_000", 00:09:27.975 "admin_qpairs": 0, 00:09:27.975 "io_qpairs": 224, 00:09:27.975 "current_admin_qpairs": 0, 00:09:27.975 "current_io_qpairs": 0, 00:09:27.975 "pending_bdev_io": 0, 00:09:27.975 "completed_nvme_io": 225, 00:09:27.975 "transports": [ 00:09:27.975 { 00:09:27.975 "trtype": "TCP" 00:09:27.975 } 00:09:27.975 ] 00:09:27.975 }, 00:09:27.975 { 00:09:27.975 "name": "nvmf_tgt_poll_group_001", 00:09:27.975 "admin_qpairs": 1, 00:09:27.975 "io_qpairs": 223, 00:09:27.975 "current_admin_qpairs": 0, 00:09:27.975 "current_io_qpairs": 0, 00:09:27.975 "pending_bdev_io": 0, 00:09:27.975 "completed_nvme_io": 273, 00:09:27.975 "transports": [ 00:09:27.975 { 00:09:27.975 "trtype": "TCP" 00:09:27.975 } 00:09:27.975 ] 00:09:27.975 }, 00:09:27.975 { 
00:09:27.975 "name": "nvmf_tgt_poll_group_002", 00:09:27.975 "admin_qpairs": 6, 00:09:27.975 "io_qpairs": 218, 00:09:27.975 "current_admin_qpairs": 0, 00:09:27.975 "current_io_qpairs": 0, 00:09:27.975 "pending_bdev_io": 0, 00:09:27.975 "completed_nvme_io": 512, 00:09:27.975 "transports": [ 00:09:27.975 { 00:09:27.975 "trtype": "TCP" 00:09:27.975 } 00:09:27.975 ] 00:09:27.975 }, 00:09:27.975 { 00:09:27.975 "name": "nvmf_tgt_poll_group_003", 00:09:27.975 "admin_qpairs": 0, 00:09:27.975 "io_qpairs": 224, 00:09:27.975 "current_admin_qpairs": 0, 00:09:27.975 "current_io_qpairs": 0, 00:09:27.975 "pending_bdev_io": 0, 00:09:27.975 "completed_nvme_io": 229, 00:09:27.975 "transports": [ 00:09:27.975 { 00:09:27.975 "trtype": "TCP" 00:09:27.975 } 00:09:27.975 ] 00:09:27.975 } 00:09:27.975 ] 00:09:27.975 }' 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.975 12:54:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.975 rmmod nvme_tcp 00:09:27.975 rmmod nvme_fabrics 00:09:27.975 rmmod nvme_keyring 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 525257 ']' 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 525257 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 525257 ']' 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 525257 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 525257 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 525257' 00:09:28.235 killing process with pid 525257 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 525257 00:09:28.235 12:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 525257 00:09:28.235 12:54:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:28.235 12:54:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:28.235 12:54:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:28.235 12:54:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.235 12:54:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.235 12:54:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.235 12:54:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.235 12:54:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.780 12:54:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:30.780 00:09:30.780 real 0m37.777s 00:09:30.780 user 1m51.284s 00:09:30.780 sys 0m7.631s 00:09:30.780 12:54:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.780 12:54:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.780 ************************************ 00:09:30.780 END TEST nvmf_rpc 00:09:30.780 ************************************ 00:09:30.780 12:54:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:30.780 12:54:52 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:30.780 12:54:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:30.780 12:54:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.780 12:54:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:30.780 ************************************ 00:09:30.780 START TEST nvmf_invalid 00:09:30.780 ************************************ 00:09:30.780 12:54:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:30.780 * Looking for test storage... 
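Before shutting the target down above, nvmf_rpc summed the per-poll-group counters returned by nvmf_get_stats and asserted that admin and I/O queue pairs were actually created. Based on the jq/awk pipeline in the trace, the aggregation amounts to the following sketch (calling rpc.py directly rather than reusing the captured stats variable, as the real rpc.sh does):

#!/usr/bin/env bash
# Sum one numeric field across all poll groups in the nvmf_get_stats output
# and require a non-zero total, as the jsum checks in the trace do.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

jsum() {
    local filter=$1
    "$rpc" nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
}

(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))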
00:09:30.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.780 12:54:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.780 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:30.780 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.780 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.780 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.780 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.780 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.780 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.780 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.780 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.780 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.780 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:30.781 12:54:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:38.922 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:38.923 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:38.923 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:38.923 Found net devices under 0000:31:00.0: cvl_0_0 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:38.923 Found net devices under 0000:31:00.1: cvl_0_1 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.923 12:54:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:38.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:38.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:09:38.923 00:09:38.923 --- 10.0.0.2 ping statistics --- 00:09:38.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.923 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:09:38.923 00:09:38.923 --- 10.0.0.1 ping statistics --- 00:09:38.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.923 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=535334 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 535334 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 535334 ']' 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.923 12:55:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:38.923 [2024-07-15 12:55:00.308092] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
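The nvmf_tcp_init sequence replayed just above moves one e810 port into a private network namespace, addresses both ends, opens TCP port 4420, and checks reachability in both directions before nvmfappstart launches nvmf_tgt inside that namespace. A condensed sketch of those steps; the interface and namespace names are the ones in the trace, and the nvmf_tgt path is abbreviated:

#!/usr/bin/env bash
# Target-side namespace setup for NVMe/TCP, as traced above. Requires root
# and the two cvl_0_* net devices created by the ice driver.
set -e

target_if=cvl_0_0        # moved into the namespace, gets 10.0.0.2
initiator_if=cvl_0_1     # stays in the root namespace, gets 10.0.0.1
ns=cvl_0_0_ns_spdk

ip netns add "$ns"
ip link set "$target_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# Let NVMe/TCP traffic from the initiator interface through.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                       # root namespace -> target
ip netns exec "$ns" ping -c 1 10.0.0.1   # target namespace -> initiator

# Start the target inside the namespace (path abbreviated).
ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &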
00:09:38.923 [2024-07-15 12:55:00.308144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.923 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.923 [2024-07-15 12:55:00.384015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.923 [2024-07-15 12:55:00.454126] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.923 [2024-07-15 12:55:00.454164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.923 [2024-07-15 12:55:00.454172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.923 [2024-07-15 12:55:00.454178] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.923 [2024-07-15 12:55:00.454184] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.923 [2024-07-15 12:55:00.454264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.923 [2024-07-15 12:55:00.454481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.924 [2024-07-15 12:55:00.454481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.924 [2024-07-15 12:55:00.454337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.492 12:55:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.493 12:55:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:39.493 12:55:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.493 12:55:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:39.493 12:55:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:39.493 12:55:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.493 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:39.493 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13391 00:09:39.493 [2024-07-15 12:55:01.269217] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:39.493 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:39.493 { 00:09:39.493 "nqn": "nqn.2016-06.io.spdk:cnode13391", 00:09:39.493 "tgt_name": "foobar", 00:09:39.493 "method": "nvmf_create_subsystem", 00:09:39.493 "req_id": 1 00:09:39.493 } 00:09:39.493 Got JSON-RPC error response 00:09:39.493 response: 00:09:39.493 { 00:09:39.493 "code": -32603, 00:09:39.493 "message": "Unable to find target foobar" 00:09:39.493 }' 00:09:39.493 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:39.493 { 00:09:39.493 "nqn": "nqn.2016-06.io.spdk:cnode13391", 00:09:39.493 "tgt_name": "foobar", 00:09:39.493 "method": "nvmf_create_subsystem", 00:09:39.493 "req_id": 1 00:09:39.493 } 00:09:39.493 Got JSON-RPC error response 00:09:39.493 response: 00:09:39.493 { 00:09:39.493 "code": -32603, 00:09:39.493 "message": "Unable to find target foobar" 
00:09:39.493 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:39.493 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:39.493 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5491 00:09:39.752 [2024-07-15 12:55:01.445809] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5491: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:39.752 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:39.752 { 00:09:39.752 "nqn": "nqn.2016-06.io.spdk:cnode5491", 00:09:39.752 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:39.752 "method": "nvmf_create_subsystem", 00:09:39.752 "req_id": 1 00:09:39.752 } 00:09:39.752 Got JSON-RPC error response 00:09:39.752 response: 00:09:39.752 { 00:09:39.752 "code": -32602, 00:09:39.752 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:39.752 }' 00:09:39.752 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:39.752 { 00:09:39.752 "nqn": "nqn.2016-06.io.spdk:cnode5491", 00:09:39.752 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:39.752 "method": "nvmf_create_subsystem", 00:09:39.752 "req_id": 1 00:09:39.752 } 00:09:39.752 Got JSON-RPC error response 00:09:39.752 response: 00:09:39.752 { 00:09:39.752 "code": -32602, 00:09:39.752 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:39.752 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:39.752 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:39.752 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5000 00:09:40.012 [2024-07-15 12:55:01.622412] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5000: invalid model number 'SPDK_Controller' 00:09:40.012 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:40.012 { 00:09:40.012 "nqn": "nqn.2016-06.io.spdk:cnode5000", 00:09:40.012 "model_number": "SPDK_Controller\u001f", 00:09:40.012 "method": "nvmf_create_subsystem", 00:09:40.012 "req_id": 1 00:09:40.012 } 00:09:40.012 Got JSON-RPC error response 00:09:40.012 response: 00:09:40.012 { 00:09:40.012 "code": -32602, 00:09:40.012 "message": "Invalid MN SPDK_Controller\u001f" 00:09:40.012 }' 00:09:40.012 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:40.012 { 00:09:40.012 "nqn": "nqn.2016-06.io.spdk:cnode5000", 00:09:40.012 "model_number": "SPDK_Controller\u001f", 00:09:40.012 "method": "nvmf_create_subsystem", 00:09:40.012 "req_id": 1 00:09:40.012 } 00:09:40.012 Got JSON-RPC error response 00:09:40.012 response: 00:09:40.012 { 00:09:40.012 "code": -32602, 00:09:40.012 "message": "Invalid MN SPDK_Controller\u001f" 00:09:40.013 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' 
'85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ';?Hx7'\''?_9V2'\''O}MOEVf'\''~' 00:09:40.013 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ';?Hx7'\''?_9V2'\''O}MOEVf'\''~' nqn.2016-06.io.spdk:cnode16458 00:09:40.274 [2024-07-15 12:55:01.959469] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16458: invalid serial number ';?Hx7'?_9V2'O}MOEVf'~' 00:09:40.274 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:40.274 { 00:09:40.274 "nqn": "nqn.2016-06.io.spdk:cnode16458", 00:09:40.274 "serial_number": ";?Hx7'\''?_9V2'\''O}MOEVf'\''~", 00:09:40.274 "method": 
"nvmf_create_subsystem", 00:09:40.274 "req_id": 1 00:09:40.274 } 00:09:40.274 Got JSON-RPC error response 00:09:40.274 response: 00:09:40.274 { 00:09:40.274 "code": -32602, 00:09:40.274 "message": "Invalid SN ;?Hx7'\''?_9V2'\''O}MOEVf'\''~" 00:09:40.274 }' 00:09:40.274 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:40.274 { 00:09:40.274 "nqn": "nqn.2016-06.io.spdk:cnode16458", 00:09:40.274 "serial_number": ";?Hx7'?_9V2'O}MOEVf'~", 00:09:40.274 "method": "nvmf_create_subsystem", 00:09:40.274 "req_id": 1 00:09:40.274 } 00:09:40.274 Got JSON-RPC error response 00:09:40.274 response: 00:09:40.274 { 00:09:40.274 "code": -32602, 00:09:40.274 "message": "Invalid SN ;?Hx7'?_9V2'O}MOEVf'~" 00:09:40.274 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:40.274 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:40.274 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:40.274 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:40.274 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:40.274 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:40.274 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:40.274 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 12:55:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.274 12:55:02 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 
12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:40.274 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:40.535 
12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:40.535 
12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:40.535 
12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:40.535 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ [ == \- ]] 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '[a6~]^BQV' 00:09:40.536 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '[a6~]^BQV' nqn.2016-06.io.spdk:cnode30624 00:09:40.796 [2024-07-15 12:55:02.445107] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30624: invalid model number '[a6~]^BQV' 00:09:40.796 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:40.796 { 00:09:40.796 "nqn": "nqn.2016-06.io.spdk:cnode30624", 00:09:40.796 "model_number": "[a6~]^BQV", 00:09:40.796 "method": "nvmf_create_subsystem", 00:09:40.796 "req_id": 1 00:09:40.796 } 00:09:40.796 Got JSON-RPC error response 00:09:40.796 response: 00:09:40.796 { 00:09:40.796 "code": -32602, 00:09:40.796 "message": "Invalid MN [a6~]^BQV" 00:09:40.796 }' 00:09:40.796 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:40.796 { 00:09:40.796 "nqn": "nqn.2016-06.io.spdk:cnode30624", 00:09:40.796 "model_number": "[a6~]^BQV", 00:09:40.796 "method": "nvmf_create_subsystem", 00:09:40.796 "req_id": 1 00:09:40.796 } 00:09:40.796 Got JSON-RPC error response 00:09:40.796 response: 00:09:40.796 { 00:09:40.796 "code": -32602, 00:09:40.796 "message": "Invalid MN [a6~]^BQV" 00:09:40.796 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:40.796 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype 
tcp 00:09:40.796 [2024-07-15 12:55:02.617747] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.056 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:41.056 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:41.056 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:41.056 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:41.056 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:41.056 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:41.316 [2024-07-15 12:55:02.954786] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:41.316 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:41.316 { 00:09:41.316 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:41.316 "listen_address": { 00:09:41.316 "trtype": "tcp", 00:09:41.316 "traddr": "", 00:09:41.316 "trsvcid": "4421" 00:09:41.316 }, 00:09:41.316 "method": "nvmf_subsystem_remove_listener", 00:09:41.316 "req_id": 1 00:09:41.316 } 00:09:41.316 Got JSON-RPC error response 00:09:41.316 response: 00:09:41.316 { 00:09:41.316 "code": -32602, 00:09:41.316 "message": "Invalid parameters" 00:09:41.316 }' 00:09:41.316 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:41.316 { 00:09:41.316 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:41.316 "listen_address": { 00:09:41.316 "trtype": "tcp", 00:09:41.316 "traddr": "", 00:09:41.316 "trsvcid": "4421" 00:09:41.316 }, 00:09:41.316 "method": "nvmf_subsystem_remove_listener", 00:09:41.316 "req_id": 1 00:09:41.316 } 00:09:41.316 Got JSON-RPC error response 00:09:41.316 response: 00:09:41.316 { 00:09:41.316 "code": -32602, 00:09:41.316 "message": "Invalid parameters" 00:09:41.316 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:41.316 12:55:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30613 -i 0 00:09:41.316 [2024-07-15 12:55:03.131297] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30613: invalid cntlid range [0-65519] 00:09:41.576 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:41.576 { 00:09:41.576 "nqn": "nqn.2016-06.io.spdk:cnode30613", 00:09:41.576 "min_cntlid": 0, 00:09:41.576 "method": "nvmf_create_subsystem", 00:09:41.576 "req_id": 1 00:09:41.576 } 00:09:41.576 Got JSON-RPC error response 00:09:41.576 response: 00:09:41.576 { 00:09:41.576 "code": -32602, 00:09:41.576 "message": "Invalid cntlid range [0-65519]" 00:09:41.576 }' 00:09:41.576 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:41.576 { 00:09:41.576 "nqn": "nqn.2016-06.io.spdk:cnode30613", 00:09:41.576 "min_cntlid": 0, 00:09:41.576 "method": "nvmf_create_subsystem", 00:09:41.576 "req_id": 1 00:09:41.576 } 00:09:41.576 Got JSON-RPC error response 00:09:41.576 response: 00:09:41.576 { 00:09:41.576 "code": -32602, 00:09:41.576 "message": "Invalid cntlid range [0-65519]" 00:09:41.576 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:41.576 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28720 -i 65520 00:09:41.576 [2024-07-15 12:55:03.299818] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28720: invalid cntlid range [65520-65519] 00:09:41.576 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:41.576 { 00:09:41.576 "nqn": "nqn.2016-06.io.spdk:cnode28720", 00:09:41.576 "min_cntlid": 65520, 00:09:41.576 "method": "nvmf_create_subsystem", 00:09:41.576 "req_id": 1 00:09:41.576 } 00:09:41.576 Got JSON-RPC error response 00:09:41.576 response: 00:09:41.576 { 00:09:41.576 "code": -32602, 00:09:41.576 "message": "Invalid cntlid range [65520-65519]" 00:09:41.576 }' 00:09:41.576 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:41.576 { 00:09:41.576 "nqn": "nqn.2016-06.io.spdk:cnode28720", 00:09:41.576 "min_cntlid": 65520, 00:09:41.576 "method": "nvmf_create_subsystem", 00:09:41.576 "req_id": 1 00:09:41.576 } 00:09:41.576 Got JSON-RPC error response 00:09:41.576 response: 00:09:41.576 { 00:09:41.576 "code": -32602, 00:09:41.576 "message": "Invalid cntlid range [65520-65519]" 00:09:41.576 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:41.576 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23938 -I 0 00:09:41.835 [2024-07-15 12:55:03.468387] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23938: invalid cntlid range [1-0] 00:09:41.835 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:41.835 { 00:09:41.835 "nqn": "nqn.2016-06.io.spdk:cnode23938", 00:09:41.835 "max_cntlid": 0, 00:09:41.835 "method": "nvmf_create_subsystem", 00:09:41.835 "req_id": 1 00:09:41.835 } 00:09:41.835 Got JSON-RPC error response 00:09:41.835 response: 00:09:41.835 { 00:09:41.835 "code": -32602, 00:09:41.835 "message": "Invalid cntlid range [1-0]" 00:09:41.835 }' 00:09:41.835 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:41.835 { 00:09:41.835 "nqn": "nqn.2016-06.io.spdk:cnode23938", 00:09:41.835 "max_cntlid": 0, 00:09:41.835 "method": "nvmf_create_subsystem", 00:09:41.835 "req_id": 1 00:09:41.835 } 00:09:41.835 Got JSON-RPC error response 00:09:41.835 response: 00:09:41.835 { 00:09:41.835 "code": -32602, 00:09:41.835 "message": "Invalid cntlid range [1-0]" 00:09:41.835 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:41.835 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5263 -I 65520 00:09:41.835 [2024-07-15 12:55:03.640880] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5263: invalid cntlid range [1-65520] 00:09:42.095 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:42.095 { 00:09:42.095 "nqn": "nqn.2016-06.io.spdk:cnode5263", 00:09:42.095 "max_cntlid": 65520, 00:09:42.095 "method": "nvmf_create_subsystem", 00:09:42.095 "req_id": 1 00:09:42.095 } 00:09:42.095 Got JSON-RPC error response 00:09:42.095 response: 00:09:42.095 { 00:09:42.095 "code": -32602, 00:09:42.095 "message": "Invalid cntlid range [1-65520]" 00:09:42.095 }' 00:09:42.095 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:42.095 { 00:09:42.095 "nqn": "nqn.2016-06.io.spdk:cnode5263", 
00:09:42.095 "max_cntlid": 65520, 00:09:42.095 "method": "nvmf_create_subsystem", 00:09:42.095 "req_id": 1 00:09:42.095 } 00:09:42.095 Got JSON-RPC error response 00:09:42.095 response: 00:09:42.095 { 00:09:42.095 "code": -32602, 00:09:42.095 "message": "Invalid cntlid range [1-65520]" 00:09:42.095 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:42.095 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15393 -i 6 -I 5 00:09:42.095 [2024-07-15 12:55:03.809417] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15393: invalid cntlid range [6-5] 00:09:42.095 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:42.095 { 00:09:42.095 "nqn": "nqn.2016-06.io.spdk:cnode15393", 00:09:42.095 "min_cntlid": 6, 00:09:42.095 "max_cntlid": 5, 00:09:42.095 "method": "nvmf_create_subsystem", 00:09:42.095 "req_id": 1 00:09:42.095 } 00:09:42.095 Got JSON-RPC error response 00:09:42.095 response: 00:09:42.095 { 00:09:42.095 "code": -32602, 00:09:42.095 "message": "Invalid cntlid range [6-5]" 00:09:42.095 }' 00:09:42.095 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:42.095 { 00:09:42.095 "nqn": "nqn.2016-06.io.spdk:cnode15393", 00:09:42.095 "min_cntlid": 6, 00:09:42.095 "max_cntlid": 5, 00:09:42.095 "method": "nvmf_create_subsystem", 00:09:42.095 "req_id": 1 00:09:42.095 } 00:09:42.095 Got JSON-RPC error response 00:09:42.095 response: 00:09:42.095 { 00:09:42.095 "code": -32602, 00:09:42.095 "message": "Invalid cntlid range [6-5]" 00:09:42.095 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:42.095 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:42.355 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:42.355 { 00:09:42.355 "name": "foobar", 00:09:42.355 "method": "nvmf_delete_target", 00:09:42.355 "req_id": 1 00:09:42.355 } 00:09:42.355 Got JSON-RPC error response 00:09:42.355 response: 00:09:42.355 { 00:09:42.355 "code": -32602, 00:09:42.355 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:42.355 }' 00:09:42.355 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:42.355 { 00:09:42.355 "name": "foobar", 00:09:42.355 "method": "nvmf_delete_target", 00:09:42.355 "req_id": 1 00:09:42.355 } 00:09:42.355 Got JSON-RPC error response 00:09:42.355 response: 00:09:42.355 { 00:09:42.355 "code": -32602, 00:09:42.355 "message": "The specified target doesn't exist, cannot delete it." 
00:09:42.355 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:42.355 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:42.355 12:55:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:42.355 12:55:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:42.355 12:55:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:42.355 12:55:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:42.355 12:55:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:42.355 12:55:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:42.355 12:55:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:42.355 rmmod nvme_tcp 00:09:42.355 rmmod nvme_fabrics 00:09:42.355 rmmod nvme_keyring 00:09:42.355 12:55:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:42.355 12:55:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:42.355 12:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:42.355 12:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 535334 ']' 00:09:42.355 12:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 535334 00:09:42.356 12:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 535334 ']' 00:09:42.356 12:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 535334 00:09:42.356 12:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:42.356 12:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:42.356 12:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 535334 00:09:42.356 12:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:42.356 12:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:42.356 12:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 535334' 00:09:42.356 killing process with pid 535334 00:09:42.356 12:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 535334 00:09:42.356 12:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 535334 00:09:42.616 12:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.616 12:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:42.616 12:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:42.616 12:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:42.616 12:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:42.616 12:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.616 12:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:42.616 12:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.534 12:55:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:44.534 00:09:44.534 real 0m14.109s 00:09:44.534 user 0m19.373s 00:09:44.534 sys 0m6.743s 00:09:44.534 12:55:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.534 12:55:06 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.534 ************************************ 00:09:44.534 END TEST nvmf_invalid 00:09:44.534 ************************************ 00:09:44.534 12:55:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:44.534 12:55:06 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:44.534 12:55:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:44.534 12:55:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.534 12:55:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:44.534 ************************************ 00:09:44.534 START TEST nvmf_abort 00:09:44.534 ************************************ 00:09:44.534 12:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:44.795 * Looking for test storage... 00:09:44.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:44.795 12:55:06 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:44.795 12:55:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.964 
12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:52.964 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:52.965 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:52.965 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:52.965 Found net devices under 0000:31:00.0: cvl_0_0 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:52.965 Found net devices under 0000:31:00.1: cvl_0_1 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:52.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:52.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:09:52.965 00:09:52.965 --- 10.0.0.2 ping statistics --- 00:09:52.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.965 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:52.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:09:52.965 00:09:52.965 --- 10.0.0.1 ping statistics --- 00:09:52.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.965 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=541009 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 541009 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 541009 ']' 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:52.965 12:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:52.965 [2024-07-15 12:55:14.704304] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:09:52.965 [2024-07-15 12:55:14.704374] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.965 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.226 [2024-07-15 12:55:14.807815] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.226 [2024-07-15 12:55:14.899420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.226 [2024-07-15 12:55:14.899472] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.226 [2024-07-15 12:55:14.899480] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.226 [2024-07-15 12:55:14.899487] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.226 [2024-07-15 12:55:14.899493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.226 [2024-07-15 12:55:14.899612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.226 [2024-07-15 12:55:14.899771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.226 [2024-07-15 12:55:14.899772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.798 [2024-07-15 12:55:15.537150] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.798 Malloc0 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.798 Delay0 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
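Network prep, condensed: the nvmftestinit/nvmf_tcp_init trace above moves one E810 port (cvl_0_0) into a fresh cvl_0_0_ns_spdk namespace as the target side, keeps the other port (cvl_0_1) in the default namespace as the initiator, assigns 10.0.0.2/24 and 10.0.0.1/24, opens TCP port 4420, and checks reachability with ping in both directions. A sketch of those steps as standalone commands, assuming the interface names and addressing seen in this run:
# Sketch of the namespace bring-up performed above (requires root; interface
# names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are taken from this run).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                   # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator check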
00:09:53.798 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.799 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.799 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.799 12:55:15 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:53.799 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.799 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.799 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.799 12:55:15 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:53.799 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.799 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.799 [2024-07-15 12:55:15.613709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.799 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.799 12:55:15 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:53.799 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.799 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:54.059 12:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.059 12:55:15 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:54.059 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.059 [2024-07-15 12:55:15.723809] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:56.039 Initializing NVMe Controllers 00:09:56.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:56.039 controller IO queue size 128 less than required 00:09:56.039 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:56.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:56.039 Initialization complete. Launching workers. 
00:09:56.039 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34016 00:09:56.039 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34077, failed to submit 62 00:09:56.039 success 34020, unsuccess 57, failed 0 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.039 rmmod nvme_tcp 00:09:56.039 rmmod nvme_fabrics 00:09:56.039 rmmod nvme_keyring 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 541009 ']' 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 541009 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 541009 ']' 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 541009 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:56.039 12:55:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 541009 00:09:56.299 12:55:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:56.299 12:55:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:56.299 12:55:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 541009' 00:09:56.299 killing process with pid 541009 00:09:56.299 12:55:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 541009 00:09:56.299 12:55:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 541009 00:09:56.299 12:55:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:56.299 12:55:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:56.299 12:55:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:56.299 12:55:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.299 12:55:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:56.299 12:55:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.299 12:55:18 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.299 12:55:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.843 12:55:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:58.843 00:09:58.843 real 0m13.748s 00:09:58.843 user 0m13.479s 00:09:58.843 sys 0m6.827s 00:09:58.843 12:55:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:58.843 12:55:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:58.843 ************************************ 00:09:58.843 END TEST nvmf_abort 00:09:58.843 ************************************ 00:09:58.843 12:55:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:58.843 12:55:20 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:58.843 12:55:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:58.843 12:55:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.843 12:55:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:58.843 ************************************ 00:09:58.843 START TEST nvmf_ns_hotplug_stress 00:09:58.843 ************************************ 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:58.843 * Looking for test storage... 00:09:58.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.843 12:55:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:58.843 12:55:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:58.843 12:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.983 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.983 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.983 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.983 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.983 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.983 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.983 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:06.984 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:06.984 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.984 12:55:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:06.984 Found net devices under 0000:31:00.0: cvl_0_0 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:06.984 Found net devices under 0000:31:00.1: cvl_0_1 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.984 12:55:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.984 12:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.731 ms 00:10:06.984 00:10:06.984 --- 10.0.0.2 ping statistics --- 00:10:06.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.984 rtt min/avg/max/mdev = 0.731/0.731/0.731/0.000 ms 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:10:06.984 00:10:06.984 --- 10.0.0.1 ping statistics --- 00:10:06.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.984 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=546265 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 546265 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 546265 ']' 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.984 12:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.984 [2024-07-15 12:55:28.262687] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:10:06.985 [2024-07-15 12:55:28.262754] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.985 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.985 [2024-07-15 12:55:28.358595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:06.985 [2024-07-15 12:55:28.452624] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.985 [2024-07-15 12:55:28.452685] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.985 [2024-07-15 12:55:28.452694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.985 [2024-07-15 12:55:28.452700] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.985 [2024-07-15 12:55:28.452706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.985 [2024-07-15 12:55:28.452841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.985 [2024-07-15 12:55:28.453004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.985 [2024-07-15 12:55:28.453005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.245 12:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.245 12:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:07.245 12:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:07.245 12:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:07.245 12:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.506 12:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.506 12:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:07.506 12:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:07.506 [2024-07-15 12:55:29.219235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.506 12:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:07.766 12:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.766 [2024-07-15 12:55:29.552601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.766 12:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:08.027 12:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:08.286 Malloc0 00:10:08.286 12:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:08.286 Delay0 00:10:08.286 12:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.547 12:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:08.807 NULL1 00:10:08.807 12:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:08.807 12:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=546933 00:10:08.807 12:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:08.807 12:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:08.807 12:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.807 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.067 12:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.067 12:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:09.067 12:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:09.327 [2024-07-15 12:55:31.016136] bdev.c:5033:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:10:09.327 true 00:10:09.327 12:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:09.327 12:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.601 12:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.601 12:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:09.601 12:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:09.860 true 00:10:09.860 12:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:09.860 12:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.860 12:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.120 12:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:10.120 12:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:10.380 true 00:10:10.380 12:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:10.380 12:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.380 12:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.639 12:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:10.639 12:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:10.639 true 00:10:10.900 12:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:10.900 12:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.900 12:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.160 12:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:11.161 12:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:11.161 true 00:10:11.161 12:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:11.161 12:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.421 12:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.682 12:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:11.682 12:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:11.682 true 00:10:11.682 12:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:11.682 12:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.942 12:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.202 
12:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:12.202 12:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:12.202 true 00:10:12.202 12:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:12.202 12:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.142 Read completed with error (sct=0, sc=11) 00:10:13.143 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.143 12:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.403 12:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:13.403 12:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:13.403 true 00:10:13.663 12:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:13.663 12:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.663 12:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.923 12:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:13.923 12:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:13.923 true 00:10:14.184 12:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:14.184 12:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.184 12:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.445 12:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:14.445 12:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:14.445 true 00:10:14.445 12:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:14.445 12:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.705 12:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.966 12:55:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:14.966 12:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:14.966 true 00:10:14.966 12:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:14.966 12:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.227 12:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.488 12:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:15.488 12:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:15.488 true 00:10:15.488 12:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:15.488 12:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.748 12:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.748 12:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:15.748 12:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:16.008 true 00:10:16.008 12:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:16.008 12:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.267 12:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.267 12:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:16.267 12:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:16.526 true 00:10:16.526 12:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:16.526 12:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.466 12:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.466 12:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:17.466 12:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:17.726 true 00:10:17.726 12:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:17.726 12:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.987 12:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.987 12:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:17.987 12:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:18.246 true 00:10:18.246 12:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:18.246 12:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.506 12:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.506 12:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:18.506 12:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:18.766 true 00:10:18.766 12:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:18.766 12:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.026 12:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.026 12:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:19.026 12:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:19.287 true 00:10:19.287 12:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:19.287 12:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.287 12:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.547 12:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:19.547 12:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:19.808 true 00:10:19.808 12:55:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:19.808 12:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.808 12:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.068 12:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:20.068 12:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:20.068 true 00:10:20.328 12:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:20.328 12:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.328 12:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.589 12:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:20.589 12:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:20.589 true 00:10:20.589 12:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:20.589 12:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.540 12:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.808 12:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:21.808 12:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:21.808 true 00:10:21.808 12:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:21.808 12:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.069 12:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.330 12:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:22.330 12:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:22.330 true 00:10:22.330 12:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:22.330 12:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.590 12:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.849 12:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:22.849 12:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:22.849 true 00:10:22.849 12:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:22.849 12:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.109 12:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.370 12:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:23.370 12:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:23.370 true 00:10:23.370 12:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:23.370 12:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.630 12:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.630 12:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:23.630 12:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:23.890 true 00:10:23.890 12:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:23.890 12:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.150 12:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.150 12:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:24.150 12:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:24.411 true 00:10:24.411 12:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:24.411 12:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.674 12:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.674 12:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:24.674 12:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:24.963 true 00:10:24.963 12:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:24.963 12:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.928 12:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.928 12:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:25.928 12:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:26.188 true 00:10:26.189 12:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:26.189 12:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.189 12:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.484 12:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:26.484 12:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:26.484 true 00:10:26.745 12:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:26.745 12:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.745 12:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.005 12:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:27.005 12:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:27.005 true 00:10:27.005 12:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:27.005 12:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.267 12:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.527 12:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:27.527 12:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:27.527 true 00:10:27.527 12:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:27.527 12:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.787 12:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.787 12:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:27.787 12:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:28.047 true 00:10:28.047 12:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:28.047 12:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.989 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.989 12:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.989 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.251 12:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:29.251 12:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:29.251 true 00:10:29.251 12:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:29.251 12:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.512 12:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.773 12:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:29.773 12:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:29.773 true 00:10:29.773 12:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:29.773 12:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.034 12:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.295 12:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:30.295 12:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:30.295 true 00:10:30.295 12:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:30.295 12:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.556 12:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.556 12:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:30.556 12:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:30.849 true 00:10:30.849 12:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:30.849 12:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.110 12:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.110 12:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:31.110 12:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:31.371 true 00:10:31.371 12:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:31.371 12:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.631 12:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.631 12:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:31.631 12:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:31.892 true 00:10:31.892 12:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:31.892 12:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.892 12:55:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.154 12:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:32.154 12:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:32.415 true 00:10:32.415 12:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:32.415 12:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.415 12:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.676 12:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:32.676 12:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:32.938 true 00:10:32.938 12:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:32.938 12:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.938 12:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.198 12:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:33.198 12:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:33.198 true 00:10:33.458 12:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:33.458 12:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.458 12:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.719 12:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:33.719 12:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:33.719 true 00:10:33.719 12:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:33.719 12:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.979 12:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.239 12:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:34.239 12:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:34.239 true 00:10:34.239 12:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:34.239 12:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.182 12:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.443 12:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:10:35.443 12:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:35.443 true 00:10:35.704 12:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:35.704 12:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.648 12:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.648 12:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:36.648 12:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:36.909 true 00:10:36.909 12:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:36.909 12:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.909 12:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.169 12:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:37.169 12:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:37.169 true 00:10:37.170 12:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:37.170 12:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.431 12:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.693 12:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:37.693 12:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:37.693 true 00:10:37.693 12:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:37.693 12:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.954 12:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.215 12:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:38.215 12:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:10:38.215 true 00:10:38.215 12:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:38.215 12:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.476 12:56:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.737 12:56:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:38.737 12:56:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:38.737 true 00:10:38.737 12:56:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:38.737 12:56:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.997 12:56:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.997 12:56:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:38.997 12:56:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:39.259 Initializing NVMe Controllers 00:10:39.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:39.259 Controller IO queue size 128, less than required. 00:10:39.259 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:39.259 Controller IO queue size 128, less than required. 00:10:39.259 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:39.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:39.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:39.259 Initialization complete. Launching workers. 00:10:39.259 ======================================================== 00:10:39.259 Latency(us) 00:10:39.259 Device Information : IOPS MiB/s Average min max 00:10:39.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 340.84 0.17 83851.79 2184.90 1159659.32 00:10:39.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6925.05 3.38 18422.31 1623.71 404106.01 00:10:39.259 ======================================================== 00:10:39.259 Total : 7265.89 3.55 21491.61 1623.71 1159659.32 00:10:39.259 00:10:39.259 true 00:10:39.259 12:56:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 546933 00:10:39.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (546933) - No such process 00:10:39.259 12:56:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 546933 00:10:39.259 12:56:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.520 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.520 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:39.520 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:39.520 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:39.520 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:39.520 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:39.780 null0 00:10:39.780 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:39.780 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:39.780 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:40.040 null1 00:10:40.040 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.040 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:40.040 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:40.040 null2 00:10:40.040 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.040 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:40.041 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:40.301 null3 00:10:40.301 12:56:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.301 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:40.301 12:56:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:40.301 null4 00:10:40.562 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.562 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:40.562 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:40.562 null5 00:10:40.562 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.562 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:40.562 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:40.822 null6 00:10:40.822 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.822 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:40.822 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:40.822 null7 00:10:40.822 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:40.822 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:40.822 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:40.822 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:40.822 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
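The long run of @44-@50 records above, with null_size stepping from 1007 up through 1051, is the first phase of ns_hotplug_stress.sh: for as long as the I/O process it is monitoring (PID 546933 in this run) stays alive, the script hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-adds the Delay0 bdev as that namespace, and bumps the size passed to bdev_null_resize for NULL1 by one each pass. A minimal sketch of that loop, reconstructed from the xtrace rather than copied from the script (the variable names and the starting null_size are assumptions):

    # Reconstruction from the trace, not the verbatim script.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as it appears in the trace
    perf_pid=546933              # the monitored I/O process in this run
    null_size=1000               # placeholder: the starting value is not visible in this excerpt
    while kill -0 "$perf_pid"; do                                             # script line 44
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # line 45
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # line 46
        null_size=$((null_size + 1))                                          # line 49
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                         # line 50
    done

Once the monitored process exits, kill -0 fails (the "line 44: kill: (546933) - No such process" record above), the loop ends, and the trace shows the script waiting on PID 546933 and removing namespaces 1 and 2 (script lines 53-55) before the parallel add/remove phase below starts.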
00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:41.083 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
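The @14-@18 records interleaved through this stretch trace the per-namespace worker that the second phase runs, one per null bdev: each worker attaches its bdev to nqn.2016-06.io.spdk:cnode1 under a fixed namespace ID and detaches it again, ten times over (the (( i < 10 )) counters). A sketch of that helper, which the script calls add_remove, reconstructed from the trace; $rpc_py stands for the scripts/rpc.py path shown in the records above:

    # Reconstruction from the @14-@18 trace records, not the verbatim script.
    add_remove() {
        local nsid=$1 bdev=$2                                                             # line 14
        for ((i = 0; i < 10; i++)); do                                                    # line 16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # line 17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # line 18
        done
    }

Eight of these run concurrently against the same subsystem, which is why the @17 add and @18 remove records for different namespace IDs interleave freely in the lines that follow.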
00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
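The @58-@64 records show how those workers get set up: nthreads=8 and an empty pids array, a loop that creates null0 through null7 with bdev_null_create (the 100 / 4096 size and block-size arguments, exactly as traced), then a second loop that launches one backgrounded worker per bdev and collects its PID. The "wait 553387 553389 ..." record a little further on is the script waiting for all eight workers at line 66. A sketch of that driver, reconstructed from the trace with assumed loop syntax:

    # Reconstruction from the @58-@66 trace records, not the verbatim script.
    nthreads=8                                            # line 58
    pids=()                                               # line 58
    for ((i = 0; i < nthreads; i++)); do                  # line 59
        "$rpc_py" bdev_null_create "null$i" 100 4096      # line 60
    done
    for ((i = 0; i < nthreads; i++)); do                  # line 62
        add_remove $((i + 1)) "null$i" &                  # line 63: nsid 1..8 against null0..null7
        pids+=($!)                                        # line 64
    done
    wait "${pids[@]}"                                     # line 66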
00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 553387 553389 553392 553395 553398 553401 553402 553405 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.084 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:41.346 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.346 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.346 12:56:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.346 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.629 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.630 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.630 12:56:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:41.894 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:42.155 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:42.415 12:56:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:42.415 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:42.675 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:42.936 12:56:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:42.936 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:43.200 12:56:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:43.200 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.200 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.200 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.200 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:43.462 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:43.723 12:56:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:43.723 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:43.985 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:44.246 12:56:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.246 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:44.246 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.246 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.247 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.247 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.247 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:44.247 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.247 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.508 12:56:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.508 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.508 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.508 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.508 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.508 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.508 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:44.508 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.508 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:44.768 rmmod nvme_tcp 00:10:44.768 rmmod nvme_fabrics 00:10:44.768 rmmod nvme_keyring 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 546265 ']' 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 546265 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 546265 ']' 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 546265 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 546265 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 546265' 00:10:44.768 killing process 
with pid 546265 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 546265 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 546265 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.768 12:56:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.311 12:56:08 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:47.311 00:10:47.311 real 0m48.462s 00:10:47.311 user 3m12.064s 00:10:47.311 sys 0m15.716s 00:10:47.311 12:56:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:47.312 12:56:08 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.312 ************************************ 00:10:47.312 END TEST nvmf_ns_hotplug_stress 00:10:47.312 ************************************ 00:10:47.312 12:56:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:47.312 12:56:08 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:47.312 12:56:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:47.312 12:56:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.312 12:56:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:47.312 ************************************ 00:10:47.312 START TEST nvmf_connect_stress 00:10:47.312 ************************************ 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:47.312 * Looking for test storage... 
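The nvmf_ns_hotplug_stress trace above is target/ns_hotplug_stress.sh repeatedly attaching and detaching namespaces on nqn.2016-06.io.spdk:cnode1 through scripts/rpc.py (the @17 add and @18 remove calls, gated by the @16 counter). A minimal sketch of that pattern, assuming the subsystem and the null0..null7 bdevs already exist; the real script's structure and ordering may differ from this per-namespace-worker illustration:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # One background worker per namespace ID: attach bdev null<N-1> as nsid N,
    # detach it again, and repeat ten times (the (( i < 10 )) guard seen in the trace).
    for n in {1..8}; do
        (
            i=0
            while ((i < 10)); do
                "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
                "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
                ((++i))
            done
        ) &
    done
    wait   # all eight workers hammer the target concurrently, which is the point of the test

Because several workers issue RPCs at once, the add/remove lines interleave in no fixed order, which is exactly what the timestamps in the trace above show.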
00:10:47.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:47.312 12:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:55.555 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:55.555 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:55.555 Found net devices under 0000:31:00.0: cvl_0_0 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:55.555 12:56:16 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:55.555 Found net devices under 0000:31:00.1: cvl_0_1 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:55.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:55.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:10:55.555 00:10:55.555 --- 10.0.0.2 ping statistics --- 00:10:55.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.555 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:10:55.555 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:10:55.556 00:10:55.556 --- 10.0.0.1 ping statistics --- 00:10:55.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.556 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:10:55.556 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.556 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:55.556 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:55.556 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.556 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:55.556 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:55.556 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.556 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:55.556 12:56:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:55.556 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:55.556 12:56:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:55.556 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:55.556 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.556 12:56:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=558966 00:10:55.556 12:56:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 558966 00:10:55.556 12:56:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:55.556 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 558966 ']' 00:10:55.556 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.556 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.556 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.556 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.556 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.556 [2024-07-15 12:56:17.072613] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:10:55.556 [2024-07-15 12:56:17.072730] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.556 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.556 [2024-07-15 12:56:17.173143] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:55.556 [2024-07-15 12:56:17.266038] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.556 [2024-07-15 12:56:17.266101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.556 [2024-07-15 12:56:17.266109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.556 [2024-07-15 12:56:17.266116] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.556 [2024-07-15 12:56:17.266123] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.556 [2024-07-15 12:56:17.266287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.556 [2024-07-15 12:56:17.266486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.556 [2024-07-15 12:56:17.266485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.137 [2024-07-15 12:56:17.896265] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.137 [2024-07-15 12:56:17.920634] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.137 NULL1 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=559060 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:56.137 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress 
-- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.398 12:56:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.659 12:56:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.659 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:10:56.659 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.659 12:56:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.659 12:56:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.919 12:56:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.919 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:10:56.919 12:56:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.919 12:56:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.919 12:56:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.490 12:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.490 12:56:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:10:57.490 
12:56:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.490 12:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.490 12:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.749 12:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.749 12:56:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:10:57.749 12:56:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.749 12:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.749 12:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.009 12:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.009 12:56:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:10:58.009 12:56:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.009 12:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.009 12:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.270 12:56:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.270 12:56:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:10:58.270 12:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.270 12:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.270 12:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.530 12:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.530 12:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:10:58.530 12:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.530 12:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.530 12:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.100 12:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.100 12:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:10:59.100 12:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.100 12:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.100 12:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.359 12:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.359 12:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:10:59.359 12:56:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.359 12:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.359 12:56:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.619 12:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.619 12:56:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:10:59.619 12:56:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:10:59.619 12:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.619 12:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.879 12:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.879 12:56:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:10:59.879 12:56:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.879 12:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.879 12:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.139 12:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.139 12:56:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:00.139 12:56:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.139 12:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.139 12:56:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.709 12:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.709 12:56:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:00.709 12:56:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.709 12:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.709 12:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.969 12:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.969 12:56:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:00.969 12:56:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.969 12:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.969 12:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.229 12:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.229 12:56:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:01.229 12:56:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.229 12:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.229 12:56:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.494 12:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.494 12:56:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:01.494 12:56:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.494 12:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.494 12:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.758 12:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.758 12:56:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:01.758 12:56:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.758 12:56:23 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.758 12:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.329 12:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.329 12:56:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:02.329 12:56:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.329 12:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.329 12:56:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.589 12:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.589 12:56:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:02.589 12:56:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.589 12:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.589 12:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.850 12:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.850 12:56:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:02.850 12:56:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.850 12:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.850 12:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.110 12:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.110 12:56:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:03.110 12:56:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.110 12:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.110 12:56:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.681 12:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.681 12:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:03.681 12:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.681 12:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.681 12:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.941 12:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.941 12:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:03.941 12:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.941 12:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.941 12:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.200 12:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.200 12:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:04.200 12:56:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.200 12:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.200 
12:56:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.460 12:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.460 12:56:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:04.460 12:56:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.460 12:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.460 12:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.721 12:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.721 12:56:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:04.721 12:56:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.721 12:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.721 12:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.291 12:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.291 12:56:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:05.291 12:56:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.291 12:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.291 12:56:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.552 12:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.552 12:56:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:05.552 12:56:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.552 12:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.552 12:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.812 12:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.812 12:56:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:05.812 12:56:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.812 12:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.812 12:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.071 12:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.071 12:56:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:06.071 12:56:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.071 12:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.071 12:56:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.330 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:06.330 12:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.330 12:56:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 559060 00:11:06.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (559060) - No such process 00:11:06.330 12:56:28 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 559060 00:11:06.330 12:56:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:06.330 12:56:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:06.330 12:56:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:06.330 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:06.330 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:06.330 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:06.330 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:06.330 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:06.330 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:06.330 rmmod nvme_tcp 00:11:06.589 rmmod nvme_fabrics 00:11:06.589 rmmod nvme_keyring 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 558966 ']' 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 558966 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 558966 ']' 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 558966 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 558966 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 558966' 00:11:06.589 killing process with pid 558966 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 558966 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 558966 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:06.589 12:56:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:11:09.129 12:56:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:09.129 00:11:09.129 real 0m21.761s 00:11:09.129 user 0m42.452s 00:11:09.129 sys 0m9.342s 00:11:09.129 12:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.129 12:56:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.129 ************************************ 00:11:09.129 END TEST nvmf_connect_stress 00:11:09.129 ************************************ 00:11:09.129 12:56:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:09.129 12:56:30 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:09.129 12:56:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:09.129 12:56:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.129 12:56:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.129 ************************************ 00:11:09.129 START TEST nvmf_fused_ordering 00:11:09.129 ************************************ 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:09.129 * Looking for test storage... 00:11:09.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.129 12:56:30 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.129 12:56:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:09.130 12:56:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:17.272 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:17.272 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:17.272 Found net devices under 0000:31:00.0: cvl_0_0 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:17.272 Found net devices under 0000:31:00.1: cvl_0_1 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:17.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:11:17.272 00:11:17.272 --- 10.0.0.2 ping statistics --- 00:11:17.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.272 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:11:17.272 00:11:17.272 --- 10.0.0.1 ping statistics --- 00:11:17.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.272 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:17.272 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=565829 00:11:17.273 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 565829 00:11:17.273 12:56:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 565829 ']' 00:11:17.273 12:56:38 nvmf_tcp.nvmf_fused_ordering 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.273 12:56:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:17.273 12:56:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.273 12:56:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:17.273 12:56:38 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:17.273 12:56:38 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:17.273 [2024-07-15 12:56:38.911336] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:11:17.273 [2024-07-15 12:56:38.911416] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.273 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.273 [2024-07-15 12:56:39.006756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.534 [2024-07-15 12:56:39.101908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.534 [2024-07-15 12:56:39.101962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.534 [2024-07-15 12:56:39.101970] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.534 [2024-07-15 12:56:39.101977] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.534 [2024-07-15 12:56:39.101983] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
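The fused_ordering run above starts a fresh nvmf_tgt inside the same namespace, this time with core mask 0x2, and then waits for its RPC socket before issuing any rpc_cmd calls. A simplified stand-in for that launch-and-wait step is sketched below; the binary path, netns name and flags are copied from this run's trace, but the polling loop is only an approximation of the harness's waitforlisten helper, which is more thorough.

    # Approximate nvmfappstart/waitforlisten sequence for the fused_ordering test.
    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    ip netns exec cvl_0_0_ns_spdk "$SPDK_TGT" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    for _ in $(seq 1 100); do                 # give the target up to ~100 s to come up
        [ -S /var/tmp/spdk.sock ] && break    # RPC socket appears once the app is ready
        kill -0 "$nvmfpid" || exit 1          # bail out if the target died during start-up
        sleep 1
    done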
00:11:17.534 [2024-07-15 12:56:39.102009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:18.105 [2024-07-15 12:56:39.711560] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:18.105 [2024-07-15 12:56:39.727683] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:18.105 NULL1 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.105 12:56:39 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.105 12:56:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:18.105 [2024-07-15 12:56:39.783422] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:11:18.105 [2024-07-15 12:56:39.783461] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid566052 ] 00:11:18.105 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.366 Attached to nqn.2016-06.io.spdk:cnode1 00:11:18.366 Namespace ID: 1 size: 1GB 00:11:18.366 fused_ordering(0) 00:11:18.366 fused_ordering(1) 00:11:18.366 fused_ordering(2) 00:11:18.366 fused_ordering(3) 00:11:18.366 fused_ordering(4) 00:11:18.366 fused_ordering(5) 00:11:18.366 fused_ordering(6) 00:11:18.366 fused_ordering(7) 00:11:18.366 fused_ordering(8) 00:11:18.366 fused_ordering(9) 00:11:18.366 fused_ordering(10) 00:11:18.366 fused_ordering(11) 00:11:18.366 fused_ordering(12) 00:11:18.366 fused_ordering(13) 00:11:18.366 fused_ordering(14) 00:11:18.366 fused_ordering(15) 00:11:18.366 fused_ordering(16) 00:11:18.366 fused_ordering(17) 00:11:18.366 fused_ordering(18) 00:11:18.366 fused_ordering(19) 00:11:18.366 fused_ordering(20) 00:11:18.366 fused_ordering(21) 00:11:18.366 fused_ordering(22) 00:11:18.366 fused_ordering(23) 00:11:18.366 fused_ordering(24) 00:11:18.366 fused_ordering(25) 00:11:18.366 fused_ordering(26) 00:11:18.366 fused_ordering(27) 00:11:18.366 fused_ordering(28) 00:11:18.366 fused_ordering(29) 00:11:18.366 fused_ordering(30) 00:11:18.366 fused_ordering(31) 00:11:18.366 fused_ordering(32) 00:11:18.366 fused_ordering(33) 00:11:18.366 fused_ordering(34) 00:11:18.366 fused_ordering(35) 00:11:18.366 fused_ordering(36) 00:11:18.366 fused_ordering(37) 00:11:18.366 fused_ordering(38) 00:11:18.366 fused_ordering(39) 00:11:18.366 fused_ordering(40) 00:11:18.366 fused_ordering(41) 00:11:18.366 fused_ordering(42) 00:11:18.366 fused_ordering(43) 00:11:18.366 fused_ordering(44) 00:11:18.366 fused_ordering(45) 00:11:18.366 fused_ordering(46) 00:11:18.366 fused_ordering(47) 00:11:18.366 fused_ordering(48) 00:11:18.366 fused_ordering(49) 00:11:18.366 fused_ordering(50) 00:11:18.366 fused_ordering(51) 00:11:18.366 fused_ordering(52) 00:11:18.366 fused_ordering(53) 00:11:18.366 fused_ordering(54) 00:11:18.366 fused_ordering(55) 00:11:18.366 fused_ordering(56) 00:11:18.366 fused_ordering(57) 00:11:18.366 fused_ordering(58) 00:11:18.366 fused_ordering(59) 00:11:18.366 fused_ordering(60) 00:11:18.366 fused_ordering(61) 00:11:18.366 fused_ordering(62) 00:11:18.366 fused_ordering(63) 00:11:18.366 fused_ordering(64) 00:11:18.366 fused_ordering(65) 00:11:18.366 fused_ordering(66) 00:11:18.366 fused_ordering(67) 00:11:18.366 fused_ordering(68) 00:11:18.366 fused_ordering(69) 00:11:18.366 fused_ordering(70) 00:11:18.366 fused_ordering(71) 00:11:18.366 fused_ordering(72) 00:11:18.366 fused_ordering(73) 00:11:18.366 fused_ordering(74) 00:11:18.366 fused_ordering(75) 00:11:18.366 fused_ordering(76) 00:11:18.366 fused_ordering(77) 00:11:18.366 fused_ordering(78) 00:11:18.366 
fused_ordering(79) 00:11:18.366 fused_ordering(80) 00:11:18.366 fused_ordering(81) 00:11:18.366 fused_ordering(82) 00:11:18.366 fused_ordering(83) 00:11:18.366 fused_ordering(84) 00:11:18.366 fused_ordering(85) 00:11:18.366 fused_ordering(86) 00:11:18.366 fused_ordering(87) 00:11:18.366 fused_ordering(88) 00:11:18.366 fused_ordering(89) 00:11:18.366 fused_ordering(90) 00:11:18.366 fused_ordering(91) 00:11:18.366 fused_ordering(92) 00:11:18.366 fused_ordering(93) 00:11:18.366 fused_ordering(94) 00:11:18.366 fused_ordering(95) 00:11:18.366 fused_ordering(96) 00:11:18.366 fused_ordering(97) 00:11:18.366 fused_ordering(98) 00:11:18.366 fused_ordering(99) 00:11:18.366 fused_ordering(100) 00:11:18.366 fused_ordering(101) 00:11:18.366 fused_ordering(102) 00:11:18.366 fused_ordering(103) 00:11:18.366 fused_ordering(104) 00:11:18.366 fused_ordering(105) 00:11:18.366 fused_ordering(106) 00:11:18.366 fused_ordering(107) 00:11:18.366 fused_ordering(108) 00:11:18.366 fused_ordering(109) 00:11:18.366 fused_ordering(110) 00:11:18.366 fused_ordering(111) 00:11:18.366 fused_ordering(112) 00:11:18.366 fused_ordering(113) 00:11:18.366 fused_ordering(114) 00:11:18.366 fused_ordering(115) 00:11:18.366 fused_ordering(116) 00:11:18.366 fused_ordering(117) 00:11:18.366 fused_ordering(118) 00:11:18.366 fused_ordering(119) 00:11:18.366 fused_ordering(120) 00:11:18.366 fused_ordering(121) 00:11:18.366 fused_ordering(122) 00:11:18.366 fused_ordering(123) 00:11:18.366 fused_ordering(124) 00:11:18.366 fused_ordering(125) 00:11:18.366 fused_ordering(126) 00:11:18.366 fused_ordering(127) 00:11:18.366 fused_ordering(128) 00:11:18.366 fused_ordering(129) 00:11:18.366 fused_ordering(130) 00:11:18.366 fused_ordering(131) 00:11:18.366 fused_ordering(132) 00:11:18.366 fused_ordering(133) 00:11:18.366 fused_ordering(134) 00:11:18.366 fused_ordering(135) 00:11:18.366 fused_ordering(136) 00:11:18.366 fused_ordering(137) 00:11:18.366 fused_ordering(138) 00:11:18.366 fused_ordering(139) 00:11:18.366 fused_ordering(140) 00:11:18.366 fused_ordering(141) 00:11:18.366 fused_ordering(142) 00:11:18.366 fused_ordering(143) 00:11:18.366 fused_ordering(144) 00:11:18.366 fused_ordering(145) 00:11:18.366 fused_ordering(146) 00:11:18.366 fused_ordering(147) 00:11:18.366 fused_ordering(148) 00:11:18.366 fused_ordering(149) 00:11:18.366 fused_ordering(150) 00:11:18.366 fused_ordering(151) 00:11:18.366 fused_ordering(152) 00:11:18.366 fused_ordering(153) 00:11:18.366 fused_ordering(154) 00:11:18.366 fused_ordering(155) 00:11:18.366 fused_ordering(156) 00:11:18.366 fused_ordering(157) 00:11:18.366 fused_ordering(158) 00:11:18.366 fused_ordering(159) 00:11:18.366 fused_ordering(160) 00:11:18.366 fused_ordering(161) 00:11:18.366 fused_ordering(162) 00:11:18.366 fused_ordering(163) 00:11:18.366 fused_ordering(164) 00:11:18.366 fused_ordering(165) 00:11:18.366 fused_ordering(166) 00:11:18.366 fused_ordering(167) 00:11:18.366 fused_ordering(168) 00:11:18.366 fused_ordering(169) 00:11:18.366 fused_ordering(170) 00:11:18.366 fused_ordering(171) 00:11:18.366 fused_ordering(172) 00:11:18.366 fused_ordering(173) 00:11:18.366 fused_ordering(174) 00:11:18.366 fused_ordering(175) 00:11:18.366 fused_ordering(176) 00:11:18.366 fused_ordering(177) 00:11:18.366 fused_ordering(178) 00:11:18.366 fused_ordering(179) 00:11:18.366 fused_ordering(180) 00:11:18.366 fused_ordering(181) 00:11:18.366 fused_ordering(182) 00:11:18.366 fused_ordering(183) 00:11:18.366 fused_ordering(184) 00:11:18.366 fused_ordering(185) 00:11:18.366 fused_ordering(186) 00:11:18.366 
fused_ordering(187) 00:11:18.366 fused_ordering(188) 00:11:18.366 fused_ordering(189) 00:11:18.366 fused_ordering(190) 00:11:18.366 fused_ordering(191) 00:11:18.366 fused_ordering(192) 00:11:18.366 fused_ordering(193) 00:11:18.366 fused_ordering(194) 00:11:18.366 fused_ordering(195) 00:11:18.366 fused_ordering(196) 00:11:18.366 fused_ordering(197) 00:11:18.366 fused_ordering(198) 00:11:18.366 fused_ordering(199) 00:11:18.366 fused_ordering(200) 00:11:18.366 fused_ordering(201) 00:11:18.366 fused_ordering(202) 00:11:18.366 fused_ordering(203) 00:11:18.366 fused_ordering(204) 00:11:18.366 fused_ordering(205) 00:11:18.937 fused_ordering(206) 00:11:18.937 fused_ordering(207) 00:11:18.937 fused_ordering(208) 00:11:18.937 fused_ordering(209) 00:11:18.937 fused_ordering(210) 00:11:18.937 fused_ordering(211) 00:11:18.937 fused_ordering(212) 00:11:18.937 fused_ordering(213) 00:11:18.937 fused_ordering(214) 00:11:18.937 fused_ordering(215) 00:11:18.937 fused_ordering(216) 00:11:18.937 fused_ordering(217) 00:11:18.937 fused_ordering(218) 00:11:18.937 fused_ordering(219) 00:11:18.937 fused_ordering(220) 00:11:18.937 fused_ordering(221) 00:11:18.937 fused_ordering(222) 00:11:18.937 fused_ordering(223) 00:11:18.937 fused_ordering(224) 00:11:18.937 fused_ordering(225) 00:11:18.937 fused_ordering(226) 00:11:18.937 fused_ordering(227) 00:11:18.937 fused_ordering(228) 00:11:18.937 fused_ordering(229) 00:11:18.937 fused_ordering(230) 00:11:18.937 fused_ordering(231) 00:11:18.937 fused_ordering(232) 00:11:18.937 fused_ordering(233) 00:11:18.937 fused_ordering(234) 00:11:18.937 fused_ordering(235) 00:11:18.937 fused_ordering(236) 00:11:18.937 fused_ordering(237) 00:11:18.937 fused_ordering(238) 00:11:18.937 fused_ordering(239) 00:11:18.937 fused_ordering(240) 00:11:18.937 fused_ordering(241) 00:11:18.937 fused_ordering(242) 00:11:18.937 fused_ordering(243) 00:11:18.937 fused_ordering(244) 00:11:18.937 fused_ordering(245) 00:11:18.937 fused_ordering(246) 00:11:18.937 fused_ordering(247) 00:11:18.937 fused_ordering(248) 00:11:18.937 fused_ordering(249) 00:11:18.937 fused_ordering(250) 00:11:18.937 fused_ordering(251) 00:11:18.937 fused_ordering(252) 00:11:18.937 fused_ordering(253) 00:11:18.937 fused_ordering(254) 00:11:18.937 fused_ordering(255) 00:11:18.937 fused_ordering(256) 00:11:18.937 fused_ordering(257) 00:11:18.937 fused_ordering(258) 00:11:18.937 fused_ordering(259) 00:11:18.937 fused_ordering(260) 00:11:18.937 fused_ordering(261) 00:11:18.937 fused_ordering(262) 00:11:18.937 fused_ordering(263) 00:11:18.937 fused_ordering(264) 00:11:18.937 fused_ordering(265) 00:11:18.937 fused_ordering(266) 00:11:18.937 fused_ordering(267) 00:11:18.937 fused_ordering(268) 00:11:18.937 fused_ordering(269) 00:11:18.937 fused_ordering(270) 00:11:18.937 fused_ordering(271) 00:11:18.937 fused_ordering(272) 00:11:18.937 fused_ordering(273) 00:11:18.937 fused_ordering(274) 00:11:18.937 fused_ordering(275) 00:11:18.937 fused_ordering(276) 00:11:18.937 fused_ordering(277) 00:11:18.937 fused_ordering(278) 00:11:18.937 fused_ordering(279) 00:11:18.937 fused_ordering(280) 00:11:18.937 fused_ordering(281) 00:11:18.937 fused_ordering(282) 00:11:18.937 fused_ordering(283) 00:11:18.937 fused_ordering(284) 00:11:18.937 fused_ordering(285) 00:11:18.937 fused_ordering(286) 00:11:18.937 fused_ordering(287) 00:11:18.937 fused_ordering(288) 00:11:18.938 fused_ordering(289) 00:11:18.938 fused_ordering(290) 00:11:18.938 fused_ordering(291) 00:11:18.938 fused_ordering(292) 00:11:18.938 fused_ordering(293) 00:11:18.938 fused_ordering(294) 
00:11:18.938 fused_ordering(295) 00:11:18.938 fused_ordering(296) 00:11:18.938 fused_ordering(297) 00:11:18.938 fused_ordering(298) 00:11:18.938 fused_ordering(299) 00:11:18.938 fused_ordering(300) 00:11:18.938 fused_ordering(301) 00:11:18.938 fused_ordering(302) 00:11:18.938 fused_ordering(303) 00:11:18.938 fused_ordering(304) 00:11:18.938 fused_ordering(305) 00:11:18.938 fused_ordering(306) 00:11:18.938 fused_ordering(307) 00:11:18.938 fused_ordering(308) 00:11:18.938 fused_ordering(309) 00:11:18.938 fused_ordering(310) 00:11:18.938 fused_ordering(311) 00:11:18.938 fused_ordering(312) 00:11:18.938 fused_ordering(313) 00:11:18.938 fused_ordering(314) 00:11:18.938 fused_ordering(315) 00:11:18.938 fused_ordering(316) 00:11:18.938 fused_ordering(317) 00:11:18.938 fused_ordering(318) 00:11:18.938 fused_ordering(319) 00:11:18.938 fused_ordering(320) 00:11:18.938 fused_ordering(321) 00:11:18.938 fused_ordering(322) 00:11:18.938 fused_ordering(323) 00:11:18.938 fused_ordering(324) 00:11:18.938 fused_ordering(325) 00:11:18.938 fused_ordering(326) 00:11:18.938 fused_ordering(327) 00:11:18.938 fused_ordering(328) 00:11:18.938 fused_ordering(329) 00:11:18.938 fused_ordering(330) 00:11:18.938 fused_ordering(331) 00:11:18.938 fused_ordering(332) 00:11:18.938 fused_ordering(333) 00:11:18.938 fused_ordering(334) 00:11:18.938 fused_ordering(335) 00:11:18.938 fused_ordering(336) 00:11:18.938 fused_ordering(337) 00:11:18.938 fused_ordering(338) 00:11:18.938 fused_ordering(339) 00:11:18.938 fused_ordering(340) 00:11:18.938 fused_ordering(341) 00:11:18.938 fused_ordering(342) 00:11:18.938 fused_ordering(343) 00:11:18.938 fused_ordering(344) 00:11:18.938 fused_ordering(345) 00:11:18.938 fused_ordering(346) 00:11:18.938 fused_ordering(347) 00:11:18.938 fused_ordering(348) 00:11:18.938 fused_ordering(349) 00:11:18.938 fused_ordering(350) 00:11:18.938 fused_ordering(351) 00:11:18.938 fused_ordering(352) 00:11:18.938 fused_ordering(353) 00:11:18.938 fused_ordering(354) 00:11:18.938 fused_ordering(355) 00:11:18.938 fused_ordering(356) 00:11:18.938 fused_ordering(357) 00:11:18.938 fused_ordering(358) 00:11:18.938 fused_ordering(359) 00:11:18.938 fused_ordering(360) 00:11:18.938 fused_ordering(361) 00:11:18.938 fused_ordering(362) 00:11:18.938 fused_ordering(363) 00:11:18.938 fused_ordering(364) 00:11:18.938 fused_ordering(365) 00:11:18.938 fused_ordering(366) 00:11:18.938 fused_ordering(367) 00:11:18.938 fused_ordering(368) 00:11:18.938 fused_ordering(369) 00:11:18.938 fused_ordering(370) 00:11:18.938 fused_ordering(371) 00:11:18.938 fused_ordering(372) 00:11:18.938 fused_ordering(373) 00:11:18.938 fused_ordering(374) 00:11:18.938 fused_ordering(375) 00:11:18.938 fused_ordering(376) 00:11:18.938 fused_ordering(377) 00:11:18.938 fused_ordering(378) 00:11:18.938 fused_ordering(379) 00:11:18.938 fused_ordering(380) 00:11:18.938 fused_ordering(381) 00:11:18.938 fused_ordering(382) 00:11:18.938 fused_ordering(383) 00:11:18.938 fused_ordering(384) 00:11:18.938 fused_ordering(385) 00:11:18.938 fused_ordering(386) 00:11:18.938 fused_ordering(387) 00:11:18.938 fused_ordering(388) 00:11:18.938 fused_ordering(389) 00:11:18.938 fused_ordering(390) 00:11:18.938 fused_ordering(391) 00:11:18.938 fused_ordering(392) 00:11:18.938 fused_ordering(393) 00:11:18.938 fused_ordering(394) 00:11:18.938 fused_ordering(395) 00:11:18.938 fused_ordering(396) 00:11:18.938 fused_ordering(397) 00:11:18.938 fused_ordering(398) 00:11:18.938 fused_ordering(399) 00:11:18.938 fused_ordering(400) 00:11:18.938 fused_ordering(401) 00:11:18.938 
fused_ordering(402) 00:11:18.938 fused_ordering(403) 00:11:18.938 fused_ordering(404) 00:11:18.938 fused_ordering(405) 00:11:18.938 fused_ordering(406) 00:11:18.938 fused_ordering(407) 00:11:18.938 fused_ordering(408) 00:11:18.938 fused_ordering(409) 00:11:18.938 fused_ordering(410) 00:11:19.199 fused_ordering(411) 00:11:19.199 fused_ordering(412) 00:11:19.199 fused_ordering(413) 00:11:19.199 fused_ordering(414) 00:11:19.199 fused_ordering(415) 00:11:19.199 fused_ordering(416) 00:11:19.199 fused_ordering(417) 00:11:19.199 fused_ordering(418) 00:11:19.199 fused_ordering(419) 00:11:19.199 fused_ordering(420) 00:11:19.199 fused_ordering(421) 00:11:19.199 fused_ordering(422) 00:11:19.199 fused_ordering(423) 00:11:19.199 fused_ordering(424) 00:11:19.199 fused_ordering(425) 00:11:19.199 fused_ordering(426) 00:11:19.199 fused_ordering(427) 00:11:19.199 fused_ordering(428) 00:11:19.199 fused_ordering(429) 00:11:19.199 fused_ordering(430) 00:11:19.199 fused_ordering(431) 00:11:19.199 fused_ordering(432) 00:11:19.199 fused_ordering(433) 00:11:19.199 fused_ordering(434) 00:11:19.199 fused_ordering(435) 00:11:19.199 fused_ordering(436) 00:11:19.199 fused_ordering(437) 00:11:19.199 fused_ordering(438) 00:11:19.199 fused_ordering(439) 00:11:19.199 fused_ordering(440) 00:11:19.199 fused_ordering(441) 00:11:19.199 fused_ordering(442) 00:11:19.199 fused_ordering(443) 00:11:19.199 fused_ordering(444) 00:11:19.199 fused_ordering(445) 00:11:19.199 fused_ordering(446) 00:11:19.199 fused_ordering(447) 00:11:19.199 fused_ordering(448) 00:11:19.199 fused_ordering(449) 00:11:19.199 fused_ordering(450) 00:11:19.199 fused_ordering(451) 00:11:19.199 fused_ordering(452) 00:11:19.199 fused_ordering(453) 00:11:19.199 fused_ordering(454) 00:11:19.199 fused_ordering(455) 00:11:19.199 fused_ordering(456) 00:11:19.199 fused_ordering(457) 00:11:19.199 fused_ordering(458) 00:11:19.199 fused_ordering(459) 00:11:19.199 fused_ordering(460) 00:11:19.199 fused_ordering(461) 00:11:19.199 fused_ordering(462) 00:11:19.199 fused_ordering(463) 00:11:19.199 fused_ordering(464) 00:11:19.199 fused_ordering(465) 00:11:19.199 fused_ordering(466) 00:11:19.199 fused_ordering(467) 00:11:19.199 fused_ordering(468) 00:11:19.199 fused_ordering(469) 00:11:19.199 fused_ordering(470) 00:11:19.199 fused_ordering(471) 00:11:19.199 fused_ordering(472) 00:11:19.199 fused_ordering(473) 00:11:19.199 fused_ordering(474) 00:11:19.199 fused_ordering(475) 00:11:19.199 fused_ordering(476) 00:11:19.199 fused_ordering(477) 00:11:19.199 fused_ordering(478) 00:11:19.199 fused_ordering(479) 00:11:19.199 fused_ordering(480) 00:11:19.199 fused_ordering(481) 00:11:19.199 fused_ordering(482) 00:11:19.199 fused_ordering(483) 00:11:19.199 fused_ordering(484) 00:11:19.199 fused_ordering(485) 00:11:19.199 fused_ordering(486) 00:11:19.199 fused_ordering(487) 00:11:19.199 fused_ordering(488) 00:11:19.199 fused_ordering(489) 00:11:19.199 fused_ordering(490) 00:11:19.199 fused_ordering(491) 00:11:19.199 fused_ordering(492) 00:11:19.199 fused_ordering(493) 00:11:19.199 fused_ordering(494) 00:11:19.199 fused_ordering(495) 00:11:19.199 fused_ordering(496) 00:11:19.199 fused_ordering(497) 00:11:19.199 fused_ordering(498) 00:11:19.199 fused_ordering(499) 00:11:19.199 fused_ordering(500) 00:11:19.199 fused_ordering(501) 00:11:19.199 fused_ordering(502) 00:11:19.199 fused_ordering(503) 00:11:19.199 fused_ordering(504) 00:11:19.199 fused_ordering(505) 00:11:19.199 fused_ordering(506) 00:11:19.199 fused_ordering(507) 00:11:19.199 fused_ordering(508) 00:11:19.199 fused_ordering(509) 
00:11:19.199 fused_ordering(510) 00:11:19.199 fused_ordering(511) 00:11:19.199 fused_ordering(512) 00:11:19.199 fused_ordering(513) 00:11:19.199 fused_ordering(514) 00:11:19.199 fused_ordering(515) 00:11:19.199 fused_ordering(516) 00:11:19.199 fused_ordering(517) 00:11:19.199 fused_ordering(518) 00:11:19.199 fused_ordering(519) 00:11:19.199 fused_ordering(520) 00:11:19.199 fused_ordering(521) 00:11:19.199 fused_ordering(522) 00:11:19.199 fused_ordering(523) 00:11:19.199 fused_ordering(524) 00:11:19.199 fused_ordering(525) 00:11:19.199 fused_ordering(526) 00:11:19.199 fused_ordering(527) 00:11:19.199 fused_ordering(528) 00:11:19.199 fused_ordering(529) 00:11:19.199 fused_ordering(530) 00:11:19.199 fused_ordering(531) 00:11:19.199 fused_ordering(532) 00:11:19.199 fused_ordering(533) 00:11:19.199 fused_ordering(534) 00:11:19.199 fused_ordering(535) 00:11:19.199 fused_ordering(536) 00:11:19.199 fused_ordering(537) 00:11:19.199 fused_ordering(538) 00:11:19.199 fused_ordering(539) 00:11:19.199 fused_ordering(540) 00:11:19.199 fused_ordering(541) 00:11:19.199 fused_ordering(542) 00:11:19.199 fused_ordering(543) 00:11:19.199 fused_ordering(544) 00:11:19.199 fused_ordering(545) 00:11:19.199 fused_ordering(546) 00:11:19.199 fused_ordering(547) 00:11:19.199 fused_ordering(548) 00:11:19.199 fused_ordering(549) 00:11:19.199 fused_ordering(550) 00:11:19.199 fused_ordering(551) 00:11:19.199 fused_ordering(552) 00:11:19.199 fused_ordering(553) 00:11:19.199 fused_ordering(554) 00:11:19.199 fused_ordering(555) 00:11:19.199 fused_ordering(556) 00:11:19.199 fused_ordering(557) 00:11:19.199 fused_ordering(558) 00:11:19.199 fused_ordering(559) 00:11:19.199 fused_ordering(560) 00:11:19.199 fused_ordering(561) 00:11:19.199 fused_ordering(562) 00:11:19.199 fused_ordering(563) 00:11:19.199 fused_ordering(564) 00:11:19.199 fused_ordering(565) 00:11:19.199 fused_ordering(566) 00:11:19.199 fused_ordering(567) 00:11:19.199 fused_ordering(568) 00:11:19.199 fused_ordering(569) 00:11:19.199 fused_ordering(570) 00:11:19.199 fused_ordering(571) 00:11:19.199 fused_ordering(572) 00:11:19.199 fused_ordering(573) 00:11:19.199 fused_ordering(574) 00:11:19.199 fused_ordering(575) 00:11:19.199 fused_ordering(576) 00:11:19.199 fused_ordering(577) 00:11:19.199 fused_ordering(578) 00:11:19.199 fused_ordering(579) 00:11:19.199 fused_ordering(580) 00:11:19.199 fused_ordering(581) 00:11:19.199 fused_ordering(582) 00:11:19.199 fused_ordering(583) 00:11:19.199 fused_ordering(584) 00:11:19.199 fused_ordering(585) 00:11:19.199 fused_ordering(586) 00:11:19.199 fused_ordering(587) 00:11:19.199 fused_ordering(588) 00:11:19.199 fused_ordering(589) 00:11:19.199 fused_ordering(590) 00:11:19.199 fused_ordering(591) 00:11:19.199 fused_ordering(592) 00:11:19.199 fused_ordering(593) 00:11:19.199 fused_ordering(594) 00:11:19.199 fused_ordering(595) 00:11:19.199 fused_ordering(596) 00:11:19.199 fused_ordering(597) 00:11:19.199 fused_ordering(598) 00:11:19.199 fused_ordering(599) 00:11:19.199 fused_ordering(600) 00:11:19.199 fused_ordering(601) 00:11:19.199 fused_ordering(602) 00:11:19.199 fused_ordering(603) 00:11:19.199 fused_ordering(604) 00:11:19.199 fused_ordering(605) 00:11:19.199 fused_ordering(606) 00:11:19.199 fused_ordering(607) 00:11:19.199 fused_ordering(608) 00:11:19.199 fused_ordering(609) 00:11:19.199 fused_ordering(610) 00:11:19.199 fused_ordering(611) 00:11:19.199 fused_ordering(612) 00:11:19.199 fused_ordering(613) 00:11:19.199 fused_ordering(614) 00:11:19.199 fused_ordering(615) 00:11:19.770 fused_ordering(616) 00:11:19.770 
fused_ordering(617) 00:11:19.770 fused_ordering(618) 00:11:19.770 fused_ordering(619) 00:11:19.770 fused_ordering(620) 00:11:19.770 fused_ordering(621) 00:11:19.770 fused_ordering(622) 00:11:19.770 fused_ordering(623) 00:11:19.770 fused_ordering(624) 00:11:19.770 fused_ordering(625) 00:11:19.770 fused_ordering(626) 00:11:19.770 fused_ordering(627) 00:11:19.770 fused_ordering(628) 00:11:19.770 fused_ordering(629) 00:11:19.770 fused_ordering(630) 00:11:19.770 fused_ordering(631) 00:11:19.770 fused_ordering(632) 00:11:19.770 fused_ordering(633) 00:11:19.770 fused_ordering(634) 00:11:19.770 fused_ordering(635) 00:11:19.770 fused_ordering(636) 00:11:19.770 fused_ordering(637) 00:11:19.770 fused_ordering(638) 00:11:19.770 fused_ordering(639) 00:11:19.770 fused_ordering(640) 00:11:19.770 fused_ordering(641) 00:11:19.770 fused_ordering(642) 00:11:19.770 fused_ordering(643) 00:11:19.770 fused_ordering(644) 00:11:19.770 fused_ordering(645) 00:11:19.770 fused_ordering(646) 00:11:19.770 fused_ordering(647) 00:11:19.770 fused_ordering(648) 00:11:19.770 fused_ordering(649) 00:11:19.770 fused_ordering(650) 00:11:19.770 fused_ordering(651) 00:11:19.770 fused_ordering(652) 00:11:19.770 fused_ordering(653) 00:11:19.770 fused_ordering(654) 00:11:19.770 fused_ordering(655) 00:11:19.770 fused_ordering(656) 00:11:19.770 fused_ordering(657) 00:11:19.770 fused_ordering(658) 00:11:19.770 fused_ordering(659) 00:11:19.770 fused_ordering(660) 00:11:19.770 fused_ordering(661) 00:11:19.770 fused_ordering(662) 00:11:19.770 fused_ordering(663) 00:11:19.770 fused_ordering(664) 00:11:19.770 fused_ordering(665) 00:11:19.770 fused_ordering(666) 00:11:19.770 fused_ordering(667) 00:11:19.770 fused_ordering(668) 00:11:19.770 fused_ordering(669) 00:11:19.770 fused_ordering(670) 00:11:19.770 fused_ordering(671) 00:11:19.770 fused_ordering(672) 00:11:19.770 fused_ordering(673) 00:11:19.770 fused_ordering(674) 00:11:19.770 fused_ordering(675) 00:11:19.770 fused_ordering(676) 00:11:19.770 fused_ordering(677) 00:11:19.771 fused_ordering(678) 00:11:19.771 fused_ordering(679) 00:11:19.771 fused_ordering(680) 00:11:19.771 fused_ordering(681) 00:11:19.771 fused_ordering(682) 00:11:19.771 fused_ordering(683) 00:11:19.771 fused_ordering(684) 00:11:19.771 fused_ordering(685) 00:11:19.771 fused_ordering(686) 00:11:19.771 fused_ordering(687) 00:11:19.771 fused_ordering(688) 00:11:19.771 fused_ordering(689) 00:11:19.771 fused_ordering(690) 00:11:19.771 fused_ordering(691) 00:11:19.771 fused_ordering(692) 00:11:19.771 fused_ordering(693) 00:11:19.771 fused_ordering(694) 00:11:19.771 fused_ordering(695) 00:11:19.771 fused_ordering(696) 00:11:19.771 fused_ordering(697) 00:11:19.771 fused_ordering(698) 00:11:19.771 fused_ordering(699) 00:11:19.771 fused_ordering(700) 00:11:19.771 fused_ordering(701) 00:11:19.771 fused_ordering(702) 00:11:19.771 fused_ordering(703) 00:11:19.771 fused_ordering(704) 00:11:19.771 fused_ordering(705) 00:11:19.771 fused_ordering(706) 00:11:19.771 fused_ordering(707) 00:11:19.771 fused_ordering(708) 00:11:19.771 fused_ordering(709) 00:11:19.771 fused_ordering(710) 00:11:19.771 fused_ordering(711) 00:11:19.771 fused_ordering(712) 00:11:19.771 fused_ordering(713) 00:11:19.771 fused_ordering(714) 00:11:19.771 fused_ordering(715) 00:11:19.771 fused_ordering(716) 00:11:19.771 fused_ordering(717) 00:11:19.771 fused_ordering(718) 00:11:19.771 fused_ordering(719) 00:11:19.771 fused_ordering(720) 00:11:19.771 fused_ordering(721) 00:11:19.771 fused_ordering(722) 00:11:19.771 fused_ordering(723) 00:11:19.771 fused_ordering(724) 
00:11:19.771 fused_ordering(725) 00:11:19.771 fused_ordering(726) 00:11:19.771 fused_ordering(727) 00:11:19.771 fused_ordering(728) 00:11:19.771 fused_ordering(729) 00:11:19.771 fused_ordering(730) 00:11:19.771 fused_ordering(731) 00:11:19.771 fused_ordering(732) 00:11:19.771 fused_ordering(733) 00:11:19.771 fused_ordering(734) 00:11:19.771 fused_ordering(735) 00:11:19.771 fused_ordering(736) 00:11:19.771 fused_ordering(737) 00:11:19.771 fused_ordering(738) 00:11:19.771 fused_ordering(739) 00:11:19.771 fused_ordering(740) 00:11:19.771 fused_ordering(741) 00:11:19.771 fused_ordering(742) 00:11:19.771 fused_ordering(743) 00:11:19.771 fused_ordering(744) 00:11:19.771 fused_ordering(745) 00:11:19.771 fused_ordering(746) 00:11:19.771 fused_ordering(747) 00:11:19.771 fused_ordering(748) 00:11:19.771 fused_ordering(749) 00:11:19.771 fused_ordering(750) 00:11:19.771 fused_ordering(751) 00:11:19.771 fused_ordering(752) 00:11:19.771 fused_ordering(753) 00:11:19.771 fused_ordering(754) 00:11:19.771 fused_ordering(755) 00:11:19.771 fused_ordering(756) 00:11:19.771 fused_ordering(757) 00:11:19.771 fused_ordering(758) 00:11:19.771 fused_ordering(759) 00:11:19.771 fused_ordering(760) 00:11:19.771 fused_ordering(761) 00:11:19.771 fused_ordering(762) 00:11:19.771 fused_ordering(763) 00:11:19.771 fused_ordering(764) 00:11:19.771 fused_ordering(765) 00:11:19.771 fused_ordering(766) 00:11:19.771 fused_ordering(767) 00:11:19.771 fused_ordering(768) 00:11:19.771 fused_ordering(769) 00:11:19.771 fused_ordering(770) 00:11:19.771 fused_ordering(771) 00:11:19.771 fused_ordering(772) 00:11:19.771 fused_ordering(773) 00:11:19.771 fused_ordering(774) 00:11:19.771 fused_ordering(775) 00:11:19.771 fused_ordering(776) 00:11:19.771 fused_ordering(777) 00:11:19.771 fused_ordering(778) 00:11:19.771 fused_ordering(779) 00:11:19.771 fused_ordering(780) 00:11:19.771 fused_ordering(781) 00:11:19.771 fused_ordering(782) 00:11:19.771 fused_ordering(783) 00:11:19.771 fused_ordering(784) 00:11:19.771 fused_ordering(785) 00:11:19.771 fused_ordering(786) 00:11:19.771 fused_ordering(787) 00:11:19.771 fused_ordering(788) 00:11:19.771 fused_ordering(789) 00:11:19.771 fused_ordering(790) 00:11:19.771 fused_ordering(791) 00:11:19.771 fused_ordering(792) 00:11:19.771 fused_ordering(793) 00:11:19.771 fused_ordering(794) 00:11:19.771 fused_ordering(795) 00:11:19.771 fused_ordering(796) 00:11:19.771 fused_ordering(797) 00:11:19.771 fused_ordering(798) 00:11:19.771 fused_ordering(799) 00:11:19.771 fused_ordering(800) 00:11:19.771 fused_ordering(801) 00:11:19.771 fused_ordering(802) 00:11:19.771 fused_ordering(803) 00:11:19.771 fused_ordering(804) 00:11:19.771 fused_ordering(805) 00:11:19.771 fused_ordering(806) 00:11:19.771 fused_ordering(807) 00:11:19.771 fused_ordering(808) 00:11:19.771 fused_ordering(809) 00:11:19.771 fused_ordering(810) 00:11:19.771 fused_ordering(811) 00:11:19.771 fused_ordering(812) 00:11:19.771 fused_ordering(813) 00:11:19.771 fused_ordering(814) 00:11:19.771 fused_ordering(815) 00:11:19.771 fused_ordering(816) 00:11:19.771 fused_ordering(817) 00:11:19.771 fused_ordering(818) 00:11:19.771 fused_ordering(819) 00:11:19.771 fused_ordering(820) 00:11:20.342 fused_ordering(821) 00:11:20.342 fused_ordering(822) 00:11:20.342 fused_ordering(823) 00:11:20.342 fused_ordering(824) 00:11:20.342 fused_ordering(825) 00:11:20.342 fused_ordering(826) 00:11:20.342 fused_ordering(827) 00:11:20.342 fused_ordering(828) 00:11:20.342 fused_ordering(829) 00:11:20.342 fused_ordering(830) 00:11:20.342 fused_ordering(831) 00:11:20.342 
fused_ordering(832) 00:11:20.342 fused_ordering(833) 00:11:20.342 fused_ordering(834) 00:11:20.342 fused_ordering(835) 00:11:20.342 fused_ordering(836) 00:11:20.342 fused_ordering(837) 00:11:20.342 fused_ordering(838) 00:11:20.342 fused_ordering(839) 00:11:20.342 fused_ordering(840) 00:11:20.342 fused_ordering(841) 00:11:20.342 fused_ordering(842) 00:11:20.342 fused_ordering(843) 00:11:20.342 fused_ordering(844) 00:11:20.342 fused_ordering(845) 00:11:20.342 fused_ordering(846) 00:11:20.342 fused_ordering(847) 00:11:20.342 fused_ordering(848) 00:11:20.342 fused_ordering(849) 00:11:20.342 fused_ordering(850) 00:11:20.342 fused_ordering(851) 00:11:20.342 fused_ordering(852) 00:11:20.342 fused_ordering(853) 00:11:20.342 fused_ordering(854) 00:11:20.342 fused_ordering(855) 00:11:20.342 fused_ordering(856) 00:11:20.342 fused_ordering(857) 00:11:20.342 fused_ordering(858) 00:11:20.342 fused_ordering(859) 00:11:20.342 fused_ordering(860) 00:11:20.342 fused_ordering(861) 00:11:20.342 fused_ordering(862) 00:11:20.342 fused_ordering(863) 00:11:20.342 fused_ordering(864) 00:11:20.342 fused_ordering(865) 00:11:20.342 fused_ordering(866) 00:11:20.342 fused_ordering(867) 00:11:20.342 fused_ordering(868) 00:11:20.342 fused_ordering(869) 00:11:20.342 fused_ordering(870) 00:11:20.342 fused_ordering(871) 00:11:20.342 fused_ordering(872) 00:11:20.342 fused_ordering(873) 00:11:20.342 fused_ordering(874) 00:11:20.342 fused_ordering(875) 00:11:20.342 fused_ordering(876) 00:11:20.342 fused_ordering(877) 00:11:20.342 fused_ordering(878) 00:11:20.342 fused_ordering(879) 00:11:20.342 fused_ordering(880) 00:11:20.342 fused_ordering(881) 00:11:20.342 fused_ordering(882) 00:11:20.342 fused_ordering(883) 00:11:20.342 fused_ordering(884) 00:11:20.342 fused_ordering(885) 00:11:20.342 fused_ordering(886) 00:11:20.342 fused_ordering(887) 00:11:20.342 fused_ordering(888) 00:11:20.342 fused_ordering(889) 00:11:20.342 fused_ordering(890) 00:11:20.342 fused_ordering(891) 00:11:20.342 fused_ordering(892) 00:11:20.342 fused_ordering(893) 00:11:20.342 fused_ordering(894) 00:11:20.342 fused_ordering(895) 00:11:20.342 fused_ordering(896) 00:11:20.342 fused_ordering(897) 00:11:20.342 fused_ordering(898) 00:11:20.342 fused_ordering(899) 00:11:20.342 fused_ordering(900) 00:11:20.342 fused_ordering(901) 00:11:20.342 fused_ordering(902) 00:11:20.342 fused_ordering(903) 00:11:20.342 fused_ordering(904) 00:11:20.342 fused_ordering(905) 00:11:20.342 fused_ordering(906) 00:11:20.342 fused_ordering(907) 00:11:20.342 fused_ordering(908) 00:11:20.342 fused_ordering(909) 00:11:20.342 fused_ordering(910) 00:11:20.342 fused_ordering(911) 00:11:20.342 fused_ordering(912) 00:11:20.342 fused_ordering(913) 00:11:20.342 fused_ordering(914) 00:11:20.342 fused_ordering(915) 00:11:20.342 fused_ordering(916) 00:11:20.342 fused_ordering(917) 00:11:20.342 fused_ordering(918) 00:11:20.342 fused_ordering(919) 00:11:20.342 fused_ordering(920) 00:11:20.342 fused_ordering(921) 00:11:20.342 fused_ordering(922) 00:11:20.342 fused_ordering(923) 00:11:20.342 fused_ordering(924) 00:11:20.342 fused_ordering(925) 00:11:20.342 fused_ordering(926) 00:11:20.342 fused_ordering(927) 00:11:20.342 fused_ordering(928) 00:11:20.342 fused_ordering(929) 00:11:20.342 fused_ordering(930) 00:11:20.342 fused_ordering(931) 00:11:20.342 fused_ordering(932) 00:11:20.342 fused_ordering(933) 00:11:20.342 fused_ordering(934) 00:11:20.342 fused_ordering(935) 00:11:20.343 fused_ordering(936) 00:11:20.343 fused_ordering(937) 00:11:20.343 fused_ordering(938) 00:11:20.343 fused_ordering(939) 
00:11:20.343 fused_ordering(940) 00:11:20.343 fused_ordering(941) 00:11:20.343 fused_ordering(942) 00:11:20.343 fused_ordering(943) 00:11:20.343 fused_ordering(944) 00:11:20.343 fused_ordering(945) 00:11:20.343 fused_ordering(946) 00:11:20.343 fused_ordering(947) 00:11:20.343 fused_ordering(948) 00:11:20.343 fused_ordering(949) 00:11:20.343 fused_ordering(950) 00:11:20.343 fused_ordering(951) 00:11:20.343 fused_ordering(952) 00:11:20.343 fused_ordering(953) 00:11:20.343 fused_ordering(954) 00:11:20.343 fused_ordering(955) 00:11:20.343 fused_ordering(956) 00:11:20.343 fused_ordering(957) 00:11:20.343 fused_ordering(958) 00:11:20.343 fused_ordering(959) 00:11:20.343 fused_ordering(960) 00:11:20.343 fused_ordering(961) 00:11:20.343 fused_ordering(962) 00:11:20.343 fused_ordering(963) 00:11:20.343 fused_ordering(964) 00:11:20.343 fused_ordering(965) 00:11:20.343 fused_ordering(966) 00:11:20.343 fused_ordering(967) 00:11:20.343 fused_ordering(968) 00:11:20.343 fused_ordering(969) 00:11:20.343 fused_ordering(970) 00:11:20.343 fused_ordering(971) 00:11:20.343 fused_ordering(972) 00:11:20.343 fused_ordering(973) 00:11:20.343 fused_ordering(974) 00:11:20.343 fused_ordering(975) 00:11:20.343 fused_ordering(976) 00:11:20.343 fused_ordering(977) 00:11:20.343 fused_ordering(978) 00:11:20.343 fused_ordering(979) 00:11:20.343 fused_ordering(980) 00:11:20.343 fused_ordering(981) 00:11:20.343 fused_ordering(982) 00:11:20.343 fused_ordering(983) 00:11:20.343 fused_ordering(984) 00:11:20.343 fused_ordering(985) 00:11:20.343 fused_ordering(986) 00:11:20.343 fused_ordering(987) 00:11:20.343 fused_ordering(988) 00:11:20.343 fused_ordering(989) 00:11:20.343 fused_ordering(990) 00:11:20.343 fused_ordering(991) 00:11:20.343 fused_ordering(992) 00:11:20.343 fused_ordering(993) 00:11:20.343 fused_ordering(994) 00:11:20.343 fused_ordering(995) 00:11:20.343 fused_ordering(996) 00:11:20.343 fused_ordering(997) 00:11:20.343 fused_ordering(998) 00:11:20.343 fused_ordering(999) 00:11:20.343 fused_ordering(1000) 00:11:20.343 fused_ordering(1001) 00:11:20.343 fused_ordering(1002) 00:11:20.343 fused_ordering(1003) 00:11:20.343 fused_ordering(1004) 00:11:20.343 fused_ordering(1005) 00:11:20.343 fused_ordering(1006) 00:11:20.343 fused_ordering(1007) 00:11:20.343 fused_ordering(1008) 00:11:20.343 fused_ordering(1009) 00:11:20.343 fused_ordering(1010) 00:11:20.343 fused_ordering(1011) 00:11:20.343 fused_ordering(1012) 00:11:20.343 fused_ordering(1013) 00:11:20.343 fused_ordering(1014) 00:11:20.343 fused_ordering(1015) 00:11:20.343 fused_ordering(1016) 00:11:20.343 fused_ordering(1017) 00:11:20.343 fused_ordering(1018) 00:11:20.343 fused_ordering(1019) 00:11:20.343 fused_ordering(1020) 00:11:20.343 fused_ordering(1021) 00:11:20.343 fused_ordering(1022) 00:11:20.343 fused_ordering(1023) 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:11:20.343 rmmod nvme_tcp 00:11:20.343 rmmod nvme_fabrics 00:11:20.343 rmmod nvme_keyring 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 565829 ']' 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 565829 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 565829 ']' 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 565829 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:20.343 12:56:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 565829 00:11:20.604 12:56:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:20.604 12:56:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:20.604 12:56:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 565829' 00:11:20.604 killing process with pid 565829 00:11:20.604 12:56:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 565829 00:11:20.604 12:56:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 565829 00:11:20.605 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.605 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:20.605 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:20.605 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.605 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:20.605 12:56:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.605 12:56:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.605 12:56:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.160 12:56:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:23.160 00:11:23.160 real 0m13.805s 00:11:23.160 user 0m7.000s 00:11:23.160 sys 0m7.435s 00:11:23.160 12:56:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:23.160 12:56:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:23.160 ************************************ 00:11:23.160 END TEST nvmf_fused_ordering 00:11:23.160 ************************************ 00:11:23.160 12:56:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:23.160 12:56:44 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:23.160 12:56:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:23.160 12:56:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.160 
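Before run_test hands control to the next script, the fused_ordering teardown recorded above (nvmftestfini) has already unloaded the host-side NVMe/TCP modules, killed the target by the PID captured at startup, and flushed the initiator address. A rough sketch of the same cleanup done by hand, using the PID 565829 that this log reports; the signal handling inside killprocess is not reproduced here:

  # Remove the initiator kernel modules loaded for the test; -r also
  # drops the now-unused dependencies (nvme_fabrics, nvme_keyring).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Stop the nvmf_tgt instance started for this test.
  kill 565829 2>/dev/null || true

  # Drop the IPv4 address that was configured on the initiator port.
  ip -4 addr flush cvl_0_1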
12:56:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:23.160 ************************************ 00:11:23.160 START TEST nvmf_delete_subsystem 00:11:23.160 ************************************ 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:23.160 * Looking for test storage... 00:11:23.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.160 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:23.161 12:56:44 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:23.161 12:56:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:31.299 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:31.299 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.299 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.300 
12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:31.300 Found net devices under 0000:31:00.0: cvl_0_0 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:31.300 Found net devices under 0000:31:00.1: cvl_0_1 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.300 12:56:52 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:31.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:11:31.300 00:11:31.300 --- 10.0.0.2 ping statistics --- 00:11:31.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.300 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:31.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:11:31.300 00:11:31.300 --- 10.0.0.1 ping statistics --- 00:11:31.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.300 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=571091 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 571091 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 571091 ']' 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.300 12:56:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.300 [2024-07-15 12:56:52.649403] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:11:31.300 [2024-07-15 12:56:52.649473] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.300 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.300 [2024-07-15 12:56:52.731881] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:31.300 [2024-07-15 12:56:52.805280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.300 [2024-07-15 12:56:52.805322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.300 [2024-07-15 12:56:52.805331] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.300 [2024-07-15 12:56:52.805337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.300 [2024-07-15 12:56:52.805343] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
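[Editor's note] The nvmf_tcp_init trace above boils down to the interface preparation below. This is a minimal sketch assembled only from commands recorded in this run (it assumes root on a host whose ice-bound E810 ports have already been renamed cvl_0_0/cvl_0_1); the namespace name and the 10.0.0.0/24 addresses are values chosen by this job, not fixed defaults.
    ip netns add cvl_0_0_ns_spdk                                        # target side gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (default namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # make sure NVMe/TCP traffic on 4420 is not filtered
    ping -c 1 10.0.0.2                                                  # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability check
Because nvmf_tgt is then launched through the same "ip netns exec cvl_0_0_ns_spdk" wrapper, the target's port-4420 listener lives inside that namespace while the initiator side (spdk_nvme_perf, nvme connect) runs in the default namespace on cvl_0_1.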
00:11:31.300 [2024-07-15 12:56:52.805426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.300 [2024-07-15 12:56:52.805429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.874 [2024-07-15 12:56:53.456825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.874 [2024-07-15 12:56:53.481005] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.874 NULL1 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.874 Delay0 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.874 12:56:53 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=571424 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:31.874 12:56:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:31.874 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.874 [2024-07-15 12:56:53.577688] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:33.801 12:56:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.801 12:56:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.801 12:56:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.061 Read completed with error (sct=0, sc=8) 00:11:34.061 Read completed with error (sct=0, sc=8) 00:11:34.061 Read completed with error (sct=0, sc=8) 00:11:34.061 Read completed with error (sct=0, sc=8) 00:11:34.061 starting I/O failed: -6 00:11:34.061 Read completed with error (sct=0, sc=8) 00:11:34.061 Read completed with error (sct=0, sc=8) 00:11:34.061 Write completed with error (sct=0, sc=8) 00:11:34.061 Write completed with error (sct=0, sc=8) 00:11:34.061 starting I/O failed: -6 00:11:34.061 Write completed with error (sct=0, sc=8) 00:11:34.061 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 
00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 [2024-07-15 12:56:55.702522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ca650 is same with the state(5) to be set 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with 
error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 [2024-07-15 12:56:55.702813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c7e90 is same with the state(5) to be set 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 starting I/O failed: -6 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 [2024-07-15 12:56:55.706564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc4d8000c00 is same with the state(5) to be set 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, 
sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Read completed with error (sct=0, sc=8) 00:11:34.062 Write completed with error (sct=0, sc=8) 00:11:35.003 [2024-07-15 12:56:56.677451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a6500 is same with the state(5) to be set 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read 
completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 [2024-07-15 12:56:56.706248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c7cb0 is same with the state(5) to be set 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 [2024-07-15 12:56:56.706338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c6d00 is same with the state(5) to be set 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed 
with error (sct=0, sc=8) 00:11:35.003 [2024-07-15 12:56:56.708690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc4d800cfe0 is same with the state(5) to be set 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Write completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 Read completed with error (sct=0, sc=8) 00:11:35.003 [2024-07-15 12:56:56.708783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc4d800d740 is same with the state(5) to be set 00:11:35.003 Initializing NVMe Controllers 00:11:35.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:35.003 Controller IO queue size 128, less than required. 00:11:35.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:35.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:35.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:35.003 Initialization complete. Launching workers. 
00:11:35.003 ======================================================== 00:11:35.003 Latency(us) 00:11:35.003 Device Information : IOPS MiB/s Average min max 00:11:35.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.81 0.08 897067.63 311.38 1007682.26 00:11:35.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.85 0.08 976402.34 286.76 2002171.52 00:11:35.003 ======================================================== 00:11:35.003 Total : 328.67 0.16 935653.15 286.76 2002171.52 00:11:35.003 00:11:35.003 [2024-07-15 12:56:56.709303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a6500 (9): Bad file descriptor 00:11:35.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:35.003 12:56:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.003 12:56:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:35.003 12:56:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 571424 00:11:35.003 12:56:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:35.573 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 571424 00:11:35.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (571424) - No such process 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 571424 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 571424 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 571424 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
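[Editor's note] The long run of "completed with error (sct=0, sc=8)" entries above is the expected result of this test, not a failure of the job: the Delay0 bdev adds a large artificial latency (the four 1000000 values passed to bdev_delay_create), so nvmf_delete_subsystem lands while spdk_nvme_perf still has a full queue of 128 outstanding commands, and those commands complete with an error status once the subsystem goes away. A minimal sketch of the sequence this first pass traces, with the rpc_cmd wrapper written out as scripts/rpc.py for readability and all values taken from this run:
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512                        # parameters exactly as traced above
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &                 # perf_pid=571424 in this run
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1       # issued after the 2-second sleep, while I/O is still queued
Hence the very low IOPS figures in the latency summary above and perf's final "errors occurred" message.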
00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.574 [2024-07-15 12:56:57.241950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=572099 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 572099 00:11:35.574 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:35.574 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.574 [2024-07-15 12:56:57.308425] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
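[Editor's note] This second pass re-creates the subsystem, re-attaches Delay0 and runs a shorter (-t 3) perf job to completion, confirming that the target still serves I/O after a delete. The repeated "kill -0 572099" / "sleep 0.5" pairs that follow are a polling loop waiting for that perf process to exit; reconstructed from the xtrace (not copied from delete_subsystem.sh itself) it is roughly:
    delay=0
    while kill -0 "$perf_pid"; do        # perf_pid=572099 here; prints "No such process" once perf has exited
        (( delay++ > 20 )) && break      # assumption: the script gives up after ~10 s; this run never reaches that branch
        sleep 0.5
    done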
00:11:36.145 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:36.145 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 572099 00:11:36.145 12:56:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:36.715 12:56:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:36.715 12:56:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 572099 00:11:36.715 12:56:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:36.976 12:56:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:36.976 12:56:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 572099 00:11:36.976 12:56:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:37.547 12:56:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:37.547 12:56:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 572099 00:11:37.547 12:56:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:38.119 12:56:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:38.119 12:56:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 572099 00:11:38.119 12:56:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:38.690 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:38.691 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 572099 00:11:38.691 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:38.691 Initializing NVMe Controllers 00:11:38.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:38.691 Controller IO queue size 128, less than required. 00:11:38.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:38.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:38.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:38.691 Initialization complete. Launching workers. 
00:11:38.691 ======================================================== 00:11:38.691 Latency(us) 00:11:38.691 Device Information : IOPS MiB/s Average min max 00:11:38.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002360.74 1000119.89 1007691.08 00:11:38.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003013.95 1000161.65 1009584.08 00:11:38.691 ======================================================== 00:11:38.691 Total : 256.00 0.12 1002687.34 1000119.89 1009584.08 00:11:38.691 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 572099 00:11:39.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (572099) - No such process 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 572099 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:39.263 rmmod nvme_tcp 00:11:39.263 rmmod nvme_fabrics 00:11:39.263 rmmod nvme_keyring 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 571091 ']' 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 571091 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 571091 ']' 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 571091 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 571091 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 571091' 00:11:39.263 killing process with pid 571091 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 571091 00:11:39.263 12:57:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 571091 
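[Editor's note] The nvmftestfini teardown traced above closes out nvmf_delete_subsystem. In outline (pids and interface names are this run's values; _remove_spdk_ns is invoked but its body is not expanded in the trace, so what it deletes is stated as an assumption):
    sync
    modprobe -v -r nvme-tcp      # also unloads nvme_fabrics and nvme_keyring, as the rmmod lines show
    modprobe -v -r nvme-fabrics
    kill 571091                  # stop the nvmf_tgt started for this test (killprocess)
    _remove_spdk_ns              # assumed to tear down the cvl_0_0_ns_spdk namespace created earlier
    ip -4 addr flush cvl_0_1     # final address cleanup, visible just below
The next test, nvmf_ns_masking, then rebuilds the same namespace and addressing layout from scratch before starting its own nvmf_tgt instance.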
00:11:39.263 12:57:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:39.263 12:57:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:39.263 12:57:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:39.263 12:57:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:39.263 12:57:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:39.263 12:57:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.263 12:57:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.263 12:57:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.811 12:57:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:41.811 00:11:41.811 real 0m18.663s 00:11:41.811 user 0m30.869s 00:11:41.811 sys 0m6.676s 00:11:41.811 12:57:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.811 12:57:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.811 ************************************ 00:11:41.811 END TEST nvmf_delete_subsystem 00:11:41.811 ************************************ 00:11:41.811 12:57:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:41.811 12:57:03 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:41.811 12:57:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:41.811 12:57:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.811 12:57:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:41.811 ************************************ 00:11:41.811 START TEST nvmf_ns_masking 00:11:41.811 ************************************ 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:41.811 * Looking for test storage... 
00:11:41.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6bfd423e-1f18-45aa-ac09-07edc8d22b45 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=0d807a46-9033-41e5-a357-2294c23673af 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b2b23757-41f6-428b-8254-e854e07dac8d 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:41.811 12:57:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:50.011 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:50.011 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:50.011 
12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:50.011 Found net devices under 0000:31:00.0: cvl_0_0 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:50.011 Found net devices under 0000:31:00.1: cvl_0_1 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:50.011 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:50.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:11:50.012 00:11:50.012 --- 10.0.0.2 ping statistics --- 00:11:50.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.012 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:11:50.012 00:11:50.012 --- 10.0.0.1 ping statistics --- 00:11:50.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.012 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=578015 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 578015 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 578015 ']' 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.012 12:57:11 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.012 12:57:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:50.012 [2024-07-15 12:57:11.524654] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:11:50.012 [2024-07-15 12:57:11.524711] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.012 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.012 [2024-07-15 12:57:11.598543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.012 [2024-07-15 12:57:11.661881] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.012 [2024-07-15 12:57:11.661917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.012 [2024-07-15 12:57:11.661925] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.012 [2024-07-15 12:57:11.661932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.012 [2024-07-15 12:57:11.661937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.012 [2024-07-15 12:57:11.661954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.583 12:57:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:50.583 12:57:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:50.583 12:57:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:50.583 12:57:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:50.583 12:57:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:50.583 12:57:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.583 12:57:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:50.843 [2024-07-15 12:57:12.444652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.843 12:57:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:50.843 12:57:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:50.843 12:57:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:50.843 Malloc1 00:11:50.843 12:57:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:51.103 Malloc2 00:11:51.103 12:57:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
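Condensed, the target bring-up traced above comes down to a handful of commands: start nvmf_tgt inside the target namespace, create the TCP transport, back it with two 64 MiB malloc bdevs, and create the test subsystem. The sketch below shortens the full /var/jenkins/workspace/... paths to nvmf_tgt and rpc.py; it is an illustration of the traced calls, not the verbatim ns_masking.sh script.

  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF &   # target app inside the netns; -e 0xFFFF is the tracepoint group mask noted in the app output above
  rpc.py nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8192-byte in-capsule data size
  rpc.py bdev_malloc_create 64 512 -b Malloc1               # 64 MiB bdev, 512-byte blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host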
00:11:51.363 12:57:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:51.363 12:57:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.622 [2024-07-15 12:57:13.291267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.622 12:57:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:51.622 12:57:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b2b23757-41f6-428b-8254-e854e07dac8d -a 10.0.0.2 -s 4420 -i 4 00:11:51.622 12:57:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.622 12:57:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:51.622 12:57:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.622 12:57:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:51.622 12:57:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.165 [ 0]:0x1 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e27f6a86523b4a9b8b709f2b702f6885 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e27f6a86523b4a9b8b709f2b702f6885 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
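The ns_is_visible checks that follow all have the same shape: resolve the controller node from nvme list-subsys, confirm the NSID shows up in nvme list-ns, and compare the NGUID reported by nvme id-ns against all zeros. A reconstruction from the traced lines of target/ns_masking.sh (not the verbatim helper):

  ctrl_id=$(nvme list-subsys -o json \
            | jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name')   # -> nvme0
  nvme list-ns /dev/$ctrl_id | grep 0x1                              # prints "[ 0]:0x1" while NSID 1 is exposed
  nguid=$(nvme id-ns /dev/$ctrl_id -n 0x1 -o json | jq -r .nguid)
  [[ $nguid != 00000000000000000000000000000000 ]]                   # an all-zero NGUID means the namespace is hidden from this host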
00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.165 [ 0]:0x1 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e27f6a86523b4a9b8b709f2b702f6885 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e27f6a86523b4a9b8b709f2b702f6885 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:54.165 [ 1]:0x2 00:11:54.165 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.166 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.166 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=515b9f16dcef4efea788487e834fa33c 00:11:54.166 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 515b9f16dcef4efea788487e834fa33c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.166 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:54.166 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.166 12:57:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.426 12:57:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:54.687 12:57:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:54.687 12:57:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b2b23757-41f6-428b-8254-e854e07dac8d -a 10.0.0.2 -s 4420 -i 4 00:11:54.687 12:57:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:54.687 12:57:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.687 12:57:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.687 12:57:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:54.687 12:57:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:54.687 12:57:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:57.230 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:57.231 12:57:18 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:57.231 [ 0]:0x2 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=515b9f16dcef4efea788487e834fa33c 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
515b9f16dcef4efea788487e834fa33c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.231 [ 0]:0x1 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e27f6a86523b4a9b8b709f2b702f6885 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e27f6a86523b4a9b8b709f2b702f6885 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:57.231 [ 1]:0x2 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.231 12:57:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:57.231 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=515b9f16dcef4efea788487e834fa33c 00:11:57.231 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 515b9f16dcef4efea788487e834fa33c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.231 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:57.491 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:57.491 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:57.491 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:57.491 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:57.492 [ 0]:0x2 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=515b9f16dcef4efea788487e834fa33c 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 515b9f16dcef4efea788487e834fa33c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:57.492 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.752 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:57.752 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:57.752 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b2b23757-41f6-428b-8254-e854e07dac8d -a 10.0.0.2 -s 4420 -i 4 00:11:58.012 12:57:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:58.012 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:58.012 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.012 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:58.012 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:58.012 12:57:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:59.924 12:57:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:59.924 12:57:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:59.924 12:57:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.924 12:57:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:59.924 12:57:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.924 12:57:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
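Between the reconnects above, the visibility flips are pure control-plane work on the target: NSID 1 is re-created with --no-auto-visible, then exposed to and hidden from nqn.2016-06.io.spdk:host1, while NSID 2 (added without the flag) stays visible throughout. With paths shortened, the key traced RPC calls reduce to:

  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible   # NSID 1 hidden by default
  rpc.py nvmf_ns_add_host         nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # expose NSID 1 to host1
  rpc.py nvmf_ns_remove_host      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # hide it again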
00:11:59.924 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:59.924 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:00.184 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:00.184 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:00.184 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:00.184 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.184 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:00.184 [ 0]:0x1 00:12:00.184 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:00.184 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.184 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e27f6a86523b4a9b8b709f2b702f6885 00:12:00.184 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e27f6a86523b4a9b8b709f2b702f6885 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.184 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:00.184 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.184 12:57:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:00.184 [ 1]:0x2 00:12:00.185 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:00.185 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=515b9f16dcef4efea788487e834fa33c 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 515b9f16dcef4efea788487e834fa33c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.446 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:00.446 [ 0]:0x2 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=515b9f16dcef4efea788487e834fa33c 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 515b9f16dcef4efea788487e834fa33c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:00.709 [2024-07-15 12:57:22.465571] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:00.709 request: 00:12:00.709 { 00:12:00.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:00.709 "nsid": 2, 00:12:00.709 "host": "nqn.2016-06.io.spdk:host1", 00:12:00.709 "method": "nvmf_ns_remove_host", 00:12:00.709 "req_id": 1 00:12:00.709 } 00:12:00.709 Got JSON-RPC error response 00:12:00.709 response: 00:12:00.709 { 00:12:00.709 "code": -32602, 00:12:00.709 "message": "Invalid parameters" 00:12:00.709 } 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:00.709 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:00.971 [ 0]:0x2 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=515b9f16dcef4efea788487e834fa33c 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
515b9f16dcef4efea788487e834fa33c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=580511 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 580511 /var/tmp/host.sock 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 580511 ']' 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:00.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:00.971 12:57:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:00.971 [2024-07-15 12:57:22.701335] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
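The -32602 response above is the expected-failure leg of the test: nvmf_ns_remove_host is issued against NSID 2, which was added without --no-auto-visible, and the target rejects it ("Unable to add/remove ... to namespace ID 2"); the NOT wrapper turns that rejection into a pass. After the final disconnect the suite starts a second SPDK app on /var/tmp/host.sock (pid 580511 above) so the remaining checks can be driven from a host-side bdev_nvme stack. A minimal sketch of the rejected call, paths shortened:

  rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2                      # auto-visible namespace
  if ! rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
      echo 'rejected as expected: code -32602, Invalid parameters'
  fi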
00:12:00.971 [2024-07-15 12:57:22.701387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid580511 ] 00:12:00.971 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.971 [2024-07-15 12:57:22.785019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.233 [2024-07-15 12:57:22.849302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.823 12:57:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:01.823 12:57:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:01.823 12:57:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.823 12:57:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:02.084 12:57:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6bfd423e-1f18-45aa-ac09-07edc8d22b45 00:12:02.084 12:57:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:02.084 12:57:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6BFD423E1F1845AAAC0907EDC8D22B45 -i 00:12:02.346 12:57:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 0d807a46-9033-41e5-a357-2294c23673af 00:12:02.346 12:57:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:02.346 12:57:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 0D807A46903341E5A3572294C23673AF -i 00:12:02.346 12:57:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:02.607 12:57:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:02.607 12:57:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:02.607 12:57:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:03.178 nvme0n1 00:12:03.178 12:57:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:03.178 12:57:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:03.178 nvme1n2 00:12:03.178 12:57:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:03.178 12:57:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:03.178 12:57:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:03.178 12:57:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:03.178 12:57:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:03.439 12:57:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:03.439 12:57:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:03.439 12:57:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:03.439 12:57:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6bfd423e-1f18-45aa-ac09-07edc8d22b45 == \6\b\f\d\4\2\3\e\-\1\f\1\8\-\4\5\a\a\-\a\c\0\9\-\0\7\e\d\c\8\d\2\2\b\4\5 ]] 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 0d807a46-9033-41e5-a357-2294c23673af == \0\d\8\0\7\a\4\6\-\9\0\3\3\-\4\1\e\5\-\a\3\5\7\-\2\2\9\4\c\2\3\6\7\3\a\f ]] 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 580511 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 580511 ']' 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 580511 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 580511 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 580511' 00:12:03.701 killing process with pid 580511 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 580511 00:12:03.701 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 580511 00:12:03.960 12:57:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:04.220 12:57:25 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:04.220 rmmod nvme_tcp 00:12:04.220 rmmod nvme_fabrics 00:12:04.220 rmmod nvme_keyring 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 578015 ']' 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 578015 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 578015 ']' 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 578015 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:04.220 12:57:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 578015 00:12:04.220 12:57:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:04.220 12:57:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:04.220 12:57:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 578015' 00:12:04.220 killing process with pid 578015 00:12:04.220 12:57:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 578015 00:12:04.220 12:57:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 578015 00:12:04.480 12:57:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:04.480 12:57:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:04.480 12:57:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:04.480 12:57:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.480 12:57:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:04.480 12:57:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.480 12:57:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.480 12:57:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.505 12:57:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:06.505 00:12:06.505 real 0m25.053s 00:12:06.505 user 0m24.075s 00:12:06.505 sys 0m7.943s 00:12:06.505 12:57:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.505 12:57:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:06.505 ************************************ 00:12:06.505 END TEST nvmf_ns_masking 00:12:06.505 ************************************ 00:12:06.505 12:57:28 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:06.505 12:57:28 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:06.505 12:57:28 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:06.505 12:57:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:06.505 12:57:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.505 12:57:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:06.505 ************************************ 00:12:06.505 START TEST nvmf_nvme_cli 00:12:06.505 ************************************ 00:12:06.505 12:57:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:06.765 * Looking for test storage... 00:12:06.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:06.765 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:06.766 12:57:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:14.897 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:14.897 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:14.897 Found net devices under 0000:31:00.0: cvl_0_0 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:14.897 Found net devices under 0000:31:00.1: cvl_0_1 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.897 12:57:36 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:14.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:12:14.897 00:12:14.897 --- 10.0.0.2 ping statistics --- 00:12:14.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.897 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:12:14.897 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:14.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:12:14.898 00:12:14.898 --- 10.0.0.1 ping statistics --- 00:12:14.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.898 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=585850 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 585850 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 585850 ']' 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:14.898 12:57:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:14.898 [2024-07-15 12:57:36.440369] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
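The trace above completes the TCP test-network bring-up: one port of the E810 pair is moved into a private network namespace, each side gets an address, an iptables rule opens TCP port 4420 for NVMe/TCP, and a ping in each direction confirms reachability before the target application is started. A condensed sketch of that sequence, using the interface names and addresses from this particular run (cvl_0_0 / cvl_0_1, 10.0.0.1 / 10.0.0.2), is:

  # clear any stale addresses on both ports
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # isolate the target-side port in its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address the initiator side (host) and the target side (namespace)
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP (port 4420) and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

On other hardware the E810 port names and PCI addresses differ, so this is an illustration of the steps the trace performs rather than a fixed recipe.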
00:12:14.898 [2024-07-15 12:57:36.440424] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.898 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.898 [2024-07-15 12:57:36.520191] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.898 [2024-07-15 12:57:36.592742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.898 [2024-07-15 12:57:36.592778] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.898 [2024-07-15 12:57:36.592786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.898 [2024-07-15 12:57:36.592792] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.898 [2024-07-15 12:57:36.592798] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.898 [2024-07-15 12:57:36.592840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.898 [2024-07-15 12:57:36.592969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.898 [2024-07-15 12:57:36.593125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.898 [2024-07-15 12:57:36.593126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.466 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.466 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:15.466 12:57:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.466 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:15.466 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.466 12:57:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.466 12:57:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.466 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.466 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.466 [2024-07-15 12:57:37.269798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.466 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.466 12:57:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:15.466 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.466 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.726 Malloc0 00:12:15.726 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.726 12:57:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:15.726 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.726 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.726 Malloc1 00:12:15.726 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.726 12:57:37 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:15.726 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.726 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.726 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.726 12:57:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.727 [2024-07-15 12:57:37.359541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:15.727 00:12:15.727 Discovery Log Number of Records 2, Generation counter 2 00:12:15.727 =====Discovery Log Entry 0====== 00:12:15.727 trtype: tcp 00:12:15.727 adrfam: ipv4 00:12:15.727 subtype: current discovery subsystem 00:12:15.727 treq: not required 00:12:15.727 portid: 0 00:12:15.727 trsvcid: 4420 00:12:15.727 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:15.727 traddr: 10.0.0.2 00:12:15.727 eflags: explicit discovery connections, duplicate discovery information 00:12:15.727 sectype: none 00:12:15.727 =====Discovery Log Entry 1====== 00:12:15.727 trtype: tcp 00:12:15.727 adrfam: ipv4 00:12:15.727 subtype: nvme subsystem 00:12:15.727 treq: not required 00:12:15.727 portid: 0 00:12:15.727 trsvcid: 4420 00:12:15.727 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:15.727 traddr: 10.0.0.2 00:12:15.727 eflags: none 00:12:15.727 sectype: none 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:15.727 12:57:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.639 12:57:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:17.639 12:57:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:17.639 12:57:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.639 12:57:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:17.639 12:57:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:17.639 12:57:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:19.551 12:57:41 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:19.551 /dev/nvme0n1 ]] 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.551 rmmod nvme_tcp 00:12:19.551 rmmod nvme_fabrics 00:12:19.551 rmmod nvme_keyring 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 585850 ']' 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 585850 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 585850 ']' 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 585850 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:19.551 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 585850 00:12:19.812 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:19.812 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:19.812 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 585850' 00:12:19.812 killing process with pid 585850 00:12:19.812 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 585850 00:12:19.812 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 585850 00:12:19.812 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.812 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.812 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.812 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.812 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.812 12:57:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.812 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.812 12:57:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.356 12:57:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:22.356 00:12:22.356 real 0m15.327s 00:12:22.356 user 0m22.020s 00:12:22.356 sys 0m6.467s 00:12:22.356 12:57:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:22.356 12:57:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.356 ************************************ 00:12:22.356 END TEST nvmf_nvme_cli 00:12:22.356 ************************************ 00:12:22.356 12:57:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:22.356 12:57:43 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:22.356 12:57:43 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:12:22.356 12:57:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:22.356 12:57:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.356 12:57:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:22.356 ************************************ 00:12:22.356 START TEST nvmf_vfio_user 00:12:22.356 ************************************ 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:22.356 * Looking for test storage... 00:12:22.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.356 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:22.357 
12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=587383 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 587383' 00:12:22.357 Process pid: 587383 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 587383 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 587383 ']' 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.357 12:57:43 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:22.357 [2024-07-15 12:57:43.905244] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:12:22.357 [2024-07-15 12:57:43.905313] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.357 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.357 [2024-07-15 12:57:43.980029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.357 [2024-07-15 12:57:44.053959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.357 [2024-07-15 12:57:44.054002] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.357 [2024-07-15 12:57:44.054010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.357 [2024-07-15 12:57:44.054017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.357 [2024-07-15 12:57:44.054022] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
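With the target process up on cores 0-3, the test provisions its vfio-user controllers over the RPC socket; the trace that follows does this twice, once for vfio-user1/cnode1 and once for vfio-user2/cnode2. A condensed per-controller sketch, with the long Jenkins workspace paths shortened to an SPDK tree root (an assumption of this summary, not something the test itself does), is:

  # start the target and create the VFIOUSER transport once
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  sleep 1                                   # give the RPC socket time to appear
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER

  # per controller: a socket directory, a backing bdev, a subsystem, a namespace, a listener
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0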
00:12:22.357 [2024-07-15 12:57:44.054164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.357 [2024-07-15 12:57:44.054273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.357 [2024-07-15 12:57:44.054371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.357 [2024-07-15 12:57:44.054372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.927 12:57:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.927 12:57:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:22.927 12:57:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:23.868 12:57:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:24.128 12:57:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:24.128 12:57:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:24.128 12:57:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:24.128 12:57:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:24.128 12:57:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:24.426 Malloc1 00:12:24.426 12:57:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:24.426 12:57:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:24.685 12:57:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:24.945 12:57:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:24.945 12:57:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:24.945 12:57:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:24.945 Malloc2 00:12:24.945 12:57:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:25.204 12:57:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:25.464 12:57:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:25.464 12:57:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:25.464 12:57:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:25.464 12:57:47 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:25.464 12:57:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:25.464 12:57:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:25.464 12:57:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:25.464 [2024-07-15 12:57:47.235877] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:12:25.464 [2024-07-15 12:57:47.235920] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588070 ] 00:12:25.464 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.464 [2024-07-15 12:57:47.266865] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:25.464 [2024-07-15 12:57:47.275537] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:25.464 [2024-07-15 12:57:47.275555] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f975fbc0000 00:12:25.464 [2024-07-15 12:57:47.276535] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.464 [2024-07-15 12:57:47.277532] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.464 [2024-07-15 12:57:47.278547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.465 [2024-07-15 12:57:47.279552] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:25.465 [2024-07-15 12:57:47.280561] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:25.465 [2024-07-15 12:57:47.281567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.465 [2024-07-15 12:57:47.282576] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:25.465 [2024-07-15 12:57:47.283577] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.465 [2024-07-15 12:57:47.284597] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:25.465 [2024-07-15 12:57:47.284606] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f975fbb5000 00:12:25.465 [2024-07-15 12:57:47.285932] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:25.728 [2024-07-15 12:57:47.306383] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:25.728 [2024-07-15 12:57:47.306411] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:25.728 [2024-07-15 12:57:47.308734] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:25.728 [2024-07-15 12:57:47.308781] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:25.728 [2024-07-15 12:57:47.308870] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:25.728 [2024-07-15 12:57:47.308888] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:25.728 [2024-07-15 12:57:47.308894] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:25.728 [2024-07-15 12:57:47.309732] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:25.728 [2024-07-15 12:57:47.309742] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:25.728 [2024-07-15 12:57:47.309749] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:25.728 [2024-07-15 12:57:47.313236] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:25.728 [2024-07-15 12:57:47.313245] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:25.728 [2024-07-15 12:57:47.313253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:25.728 [2024-07-15 12:57:47.313760] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:25.728 [2024-07-15 12:57:47.313767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:25.728 [2024-07-15 12:57:47.314773] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:25.728 [2024-07-15 12:57:47.314782] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:25.728 [2024-07-15 12:57:47.314787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:25.728 [2024-07-15 12:57:47.314796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:25.728 [2024-07-15 12:57:47.314902] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:25.728 [2024-07-15 12:57:47.314907] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:25.728 [2024-07-15 12:57:47.314912] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:25.728 [2024-07-15 12:57:47.315775] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:25.728 [2024-07-15 12:57:47.316782] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:25.728 [2024-07-15 12:57:47.317790] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:25.728 [2024-07-15 12:57:47.318781] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:25.728 [2024-07-15 12:57:47.318845] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:25.728 [2024-07-15 12:57:47.319793] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:25.728 [2024-07-15 12:57:47.319801] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:25.728 [2024-07-15 12:57:47.319806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:25.728 [2024-07-15 12:57:47.319827] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:25.728 [2024-07-15 12:57:47.319834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:25.728 [2024-07-15 12:57:47.319850] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:25.728 [2024-07-15 12:57:47.319855] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.728 [2024-07-15 12:57:47.319869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.728 [2024-07-15 12:57:47.319900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:25.728 [2024-07-15 12:57:47.319909] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:25.728 [2024-07-15 12:57:47.319916] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:25.728 [2024-07-15 12:57:47.319920] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:25.728 [2024-07-15 12:57:47.319925] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:25.728 [2024-07-15 12:57:47.319930] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:25.728 [2024-07-15 12:57:47.319934] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:25.728 [2024-07-15 12:57:47.319939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:25.728 [2024-07-15 12:57:47.319947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:25.728 [2024-07-15 12:57:47.319959] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:25.728 [2024-07-15 12:57:47.319971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:25.728 [2024-07-15 12:57:47.319983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.728 [2024-07-15 12:57:47.319992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.728 [2024-07-15 12:57:47.320000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.728 [2024-07-15 12:57:47.320009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.728 [2024-07-15 12:57:47.320014] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:25.728 [2024-07-15 12:57:47.320023] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:25.728 [2024-07-15 12:57:47.320033] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:25.728 [2024-07-15 12:57:47.320042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:25.728 [2024-07-15 12:57:47.320048] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:25.728 [2024-07-15 12:57:47.320053] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:25.728 [2024-07-15 12:57:47.320059] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:25.728 [2024-07-15 12:57:47.320065] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:25.728 [2024-07-15 12:57:47.320074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:25.728 [2024-07-15 12:57:47.320086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:25.728 [2024-07-15 12:57:47.320145] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:25.728 [2024-07-15 12:57:47.320153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:25.728 [2024-07-15 12:57:47.320161] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:25.728 [2024-07-15 12:57:47.320165] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:25.728 [2024-07-15 12:57:47.320171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:25.729 [2024-07-15 12:57:47.320180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:25.729 [2024-07-15 12:57:47.320189] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:25.729 [2024-07-15 12:57:47.320201] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:25.729 [2024-07-15 12:57:47.320209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:25.729 [2024-07-15 12:57:47.320217] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:25.729 [2024-07-15 12:57:47.320221] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.729 [2024-07-15 12:57:47.320227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.729 [2024-07-15 12:57:47.320247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:25.729 [2024-07-15 12:57:47.320260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:25.729 [2024-07-15 12:57:47.320267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:25.729 [2024-07-15 12:57:47.320274] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:25.729 [2024-07-15 12:57:47.320278] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.729 [2024-07-15 12:57:47.320284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.729 [2024-07-15 12:57:47.320291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:25.729 [2024-07-15 12:57:47.320300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:25.729 [2024-07-15 12:57:47.320306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
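The controller-initialization trace above (BAR mapping through vfio_user_pci, the CC.EN/CSTS.RDY enable handshake, then the Identify and Features admin commands) comes from the spdk_nvme_identify run started at target/nvmf_vfio_user.sh@83, and the identify report it produces is printed further below. To repeat it by hand against the same controller, the invocation from this run reduces to the following (path again shortened to the SPDK tree root as an illustration):

  # -r selects the vfio-user transport and points traddr at the controller's socket directory;
  # the -L flags enable the nvme/nvme_vfio/vfio_pci debug logs seen in this trace
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci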
00:12:25.729 [2024-07-15 12:57:47.320313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:25.729 [2024-07-15 12:57:47.320319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:25.729 [2024-07-15 12:57:47.320324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:25.729 [2024-07-15 12:57:47.320329] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:25.729 [2024-07-15 12:57:47.320335] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:25.729 [2024-07-15 12:57:47.320339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:25.729 [2024-07-15 12:57:47.320344] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:25.729 [2024-07-15 12:57:47.320362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:25.729 [2024-07-15 12:57:47.320371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:25.729 [2024-07-15 12:57:47.320383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:25.729 [2024-07-15 12:57:47.320395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:25.729 [2024-07-15 12:57:47.320406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:25.729 [2024-07-15 12:57:47.320412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:25.729 [2024-07-15 12:57:47.320425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:25.729 [2024-07-15 12:57:47.320434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:25.729 [2024-07-15 12:57:47.320447] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:25.729 [2024-07-15 12:57:47.320452] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:25.729 [2024-07-15 12:57:47.320455] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:25.729 [2024-07-15 12:57:47.320459] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:25.729 [2024-07-15 12:57:47.320465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:25.729 [2024-07-15 12:57:47.320473] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:25.729 
[2024-07-15 12:57:47.320477] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:25.729 [2024-07-15 12:57:47.320483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:25.729 [2024-07-15 12:57:47.320490] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:25.729 [2024-07-15 12:57:47.320494] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.729 [2024-07-15 12:57:47.320500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.729 [2024-07-15 12:57:47.320508] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:25.729 [2024-07-15 12:57:47.320512] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:25.729 [2024-07-15 12:57:47.320517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:25.729 [2024-07-15 12:57:47.320524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:25.729 [2024-07-15 12:57:47.320536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:25.729 [2024-07-15 12:57:47.320547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:25.729 [2024-07-15 12:57:47.320554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:25.729 ===================================================== 00:12:25.729 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:25.729 ===================================================== 00:12:25.729 Controller Capabilities/Features 00:12:25.729 ================================ 00:12:25.729 Vendor ID: 4e58 00:12:25.729 Subsystem Vendor ID: 4e58 00:12:25.729 Serial Number: SPDK1 00:12:25.729 Model Number: SPDK bdev Controller 00:12:25.729 Firmware Version: 24.09 00:12:25.729 Recommended Arb Burst: 6 00:12:25.729 IEEE OUI Identifier: 8d 6b 50 00:12:25.729 Multi-path I/O 00:12:25.729 May have multiple subsystem ports: Yes 00:12:25.729 May have multiple controllers: Yes 00:12:25.729 Associated with SR-IOV VF: No 00:12:25.729 Max Data Transfer Size: 131072 00:12:25.729 Max Number of Namespaces: 32 00:12:25.729 Max Number of I/O Queues: 127 00:12:25.729 NVMe Specification Version (VS): 1.3 00:12:25.729 NVMe Specification Version (Identify): 1.3 00:12:25.729 Maximum Queue Entries: 256 00:12:25.729 Contiguous Queues Required: Yes 00:12:25.729 Arbitration Mechanisms Supported 00:12:25.729 Weighted Round Robin: Not Supported 00:12:25.729 Vendor Specific: Not Supported 00:12:25.729 Reset Timeout: 15000 ms 00:12:25.729 Doorbell Stride: 4 bytes 00:12:25.729 NVM Subsystem Reset: Not Supported 00:12:25.729 Command Sets Supported 00:12:25.729 NVM Command Set: Supported 00:12:25.729 Boot Partition: Not Supported 00:12:25.729 Memory Page Size Minimum: 4096 bytes 00:12:25.729 Memory Page Size Maximum: 4096 bytes 00:12:25.729 Persistent Memory Region: Not Supported 
00:12:25.729 Optional Asynchronous Events Supported 00:12:25.729 Namespace Attribute Notices: Supported 00:12:25.729 Firmware Activation Notices: Not Supported 00:12:25.729 ANA Change Notices: Not Supported 00:12:25.729 PLE Aggregate Log Change Notices: Not Supported 00:12:25.729 LBA Status Info Alert Notices: Not Supported 00:12:25.729 EGE Aggregate Log Change Notices: Not Supported 00:12:25.729 Normal NVM Subsystem Shutdown event: Not Supported 00:12:25.729 Zone Descriptor Change Notices: Not Supported 00:12:25.729 Discovery Log Change Notices: Not Supported 00:12:25.729 Controller Attributes 00:12:25.729 128-bit Host Identifier: Supported 00:12:25.729 Non-Operational Permissive Mode: Not Supported 00:12:25.729 NVM Sets: Not Supported 00:12:25.729 Read Recovery Levels: Not Supported 00:12:25.729 Endurance Groups: Not Supported 00:12:25.729 Predictable Latency Mode: Not Supported 00:12:25.729 Traffic Based Keep ALive: Not Supported 00:12:25.729 Namespace Granularity: Not Supported 00:12:25.729 SQ Associations: Not Supported 00:12:25.729 UUID List: Not Supported 00:12:25.729 Multi-Domain Subsystem: Not Supported 00:12:25.729 Fixed Capacity Management: Not Supported 00:12:25.729 Variable Capacity Management: Not Supported 00:12:25.729 Delete Endurance Group: Not Supported 00:12:25.729 Delete NVM Set: Not Supported 00:12:25.729 Extended LBA Formats Supported: Not Supported 00:12:25.729 Flexible Data Placement Supported: Not Supported 00:12:25.729 00:12:25.729 Controller Memory Buffer Support 00:12:25.729 ================================ 00:12:25.729 Supported: No 00:12:25.729 00:12:25.729 Persistent Memory Region Support 00:12:25.729 ================================ 00:12:25.729 Supported: No 00:12:25.729 00:12:25.729 Admin Command Set Attributes 00:12:25.729 ============================ 00:12:25.729 Security Send/Receive: Not Supported 00:12:25.729 Format NVM: Not Supported 00:12:25.729 Firmware Activate/Download: Not Supported 00:12:25.729 Namespace Management: Not Supported 00:12:25.729 Device Self-Test: Not Supported 00:12:25.729 Directives: Not Supported 00:12:25.729 NVMe-MI: Not Supported 00:12:25.729 Virtualization Management: Not Supported 00:12:25.729 Doorbell Buffer Config: Not Supported 00:12:25.729 Get LBA Status Capability: Not Supported 00:12:25.729 Command & Feature Lockdown Capability: Not Supported 00:12:25.729 Abort Command Limit: 4 00:12:25.729 Async Event Request Limit: 4 00:12:25.729 Number of Firmware Slots: N/A 00:12:25.729 Firmware Slot 1 Read-Only: N/A 00:12:25.729 Firmware Activation Without Reset: N/A 00:12:25.729 Multiple Update Detection Support: N/A 00:12:25.729 Firmware Update Granularity: No Information Provided 00:12:25.730 Per-Namespace SMART Log: No 00:12:25.730 Asymmetric Namespace Access Log Page: Not Supported 00:12:25.730 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:25.730 Command Effects Log Page: Supported 00:12:25.730 Get Log Page Extended Data: Supported 00:12:25.730 Telemetry Log Pages: Not Supported 00:12:25.730 Persistent Event Log Pages: Not Supported 00:12:25.730 Supported Log Pages Log Page: May Support 00:12:25.730 Commands Supported & Effects Log Page: Not Supported 00:12:25.730 Feature Identifiers & Effects Log Page:May Support 00:12:25.730 NVMe-MI Commands & Effects Log Page: May Support 00:12:25.730 Data Area 4 for Telemetry Log: Not Supported 00:12:25.730 Error Log Page Entries Supported: 128 00:12:25.730 Keep Alive: Supported 00:12:25.730 Keep Alive Granularity: 10000 ms 00:12:25.730 00:12:25.730 NVM Command Set Attributes 
00:12:25.730 ========================== 00:12:25.730 Submission Queue Entry Size 00:12:25.730 Max: 64 00:12:25.730 Min: 64 00:12:25.730 Completion Queue Entry Size 00:12:25.730 Max: 16 00:12:25.730 Min: 16 00:12:25.730 Number of Namespaces: 32 00:12:25.730 Compare Command: Supported 00:12:25.730 Write Uncorrectable Command: Not Supported 00:12:25.730 Dataset Management Command: Supported 00:12:25.730 Write Zeroes Command: Supported 00:12:25.730 Set Features Save Field: Not Supported 00:12:25.730 Reservations: Not Supported 00:12:25.730 Timestamp: Not Supported 00:12:25.730 Copy: Supported 00:12:25.730 Volatile Write Cache: Present 00:12:25.730 Atomic Write Unit (Normal): 1 00:12:25.730 Atomic Write Unit (PFail): 1 00:12:25.730 Atomic Compare & Write Unit: 1 00:12:25.730 Fused Compare & Write: Supported 00:12:25.730 Scatter-Gather List 00:12:25.730 SGL Command Set: Supported (Dword aligned) 00:12:25.730 SGL Keyed: Not Supported 00:12:25.730 SGL Bit Bucket Descriptor: Not Supported 00:12:25.730 SGL Metadata Pointer: Not Supported 00:12:25.730 Oversized SGL: Not Supported 00:12:25.730 SGL Metadata Address: Not Supported 00:12:25.730 SGL Offset: Not Supported 00:12:25.730 Transport SGL Data Block: Not Supported 00:12:25.730 Replay Protected Memory Block: Not Supported 00:12:25.730 00:12:25.730 Firmware Slot Information 00:12:25.730 ========================= 00:12:25.730 Active slot: 1 00:12:25.730 Slot 1 Firmware Revision: 24.09 00:12:25.730 00:12:25.730 00:12:25.730 Commands Supported and Effects 00:12:25.730 ============================== 00:12:25.730 Admin Commands 00:12:25.730 -------------- 00:12:25.730 Get Log Page (02h): Supported 00:12:25.730 Identify (06h): Supported 00:12:25.730 Abort (08h): Supported 00:12:25.730 Set Features (09h): Supported 00:12:25.730 Get Features (0Ah): Supported 00:12:25.730 Asynchronous Event Request (0Ch): Supported 00:12:25.730 Keep Alive (18h): Supported 00:12:25.730 I/O Commands 00:12:25.730 ------------ 00:12:25.730 Flush (00h): Supported LBA-Change 00:12:25.730 Write (01h): Supported LBA-Change 00:12:25.730 Read (02h): Supported 00:12:25.730 Compare (05h): Supported 00:12:25.730 Write Zeroes (08h): Supported LBA-Change 00:12:25.730 Dataset Management (09h): Supported LBA-Change 00:12:25.730 Copy (19h): Supported LBA-Change 00:12:25.730 00:12:25.730 Error Log 00:12:25.730 ========= 00:12:25.730 00:12:25.730 Arbitration 00:12:25.730 =========== 00:12:25.730 Arbitration Burst: 1 00:12:25.730 00:12:25.730 Power Management 00:12:25.730 ================ 00:12:25.730 Number of Power States: 1 00:12:25.730 Current Power State: Power State #0 00:12:25.730 Power State #0: 00:12:25.730 Max Power: 0.00 W 00:12:25.730 Non-Operational State: Operational 00:12:25.730 Entry Latency: Not Reported 00:12:25.730 Exit Latency: Not Reported 00:12:25.730 Relative Read Throughput: 0 00:12:25.730 Relative Read Latency: 0 00:12:25.730 Relative Write Throughput: 0 00:12:25.730 Relative Write Latency: 0 00:12:25.730 Idle Power: Not Reported 00:12:25.730 Active Power: Not Reported 00:12:25.730 Non-Operational Permissive Mode: Not Supported 00:12:25.730 00:12:25.730 Health Information 00:12:25.730 ================== 00:12:25.730 Critical Warnings: 00:12:25.730 Available Spare Space: OK 00:12:25.730 Temperature: OK 00:12:25.730 Device Reliability: OK 00:12:25.730 Read Only: No 00:12:25.730 Volatile Memory Backup: OK 00:12:25.730 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:25.730 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:25.730 Available Spare: 0% 00:12:25.730 
[2024-07-15 12:57:47.320653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:25.730 [2024-07-15 12:57:47.320664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:25.730 [2024-07-15 12:57:47.320691] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:25.730 [2024-07-15 12:57:47.320701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.730 [2024-07-15 12:57:47.320707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.730 [2024-07-15 12:57:47.320713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.730 [2024-07-15 12:57:47.320719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.730 [2024-07-15 12:57:47.320802] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:25.730 [2024-07-15 12:57:47.320813] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:25.730 [2024-07-15 12:57:47.321810] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:25.730 [2024-07-15 12:57:47.321852] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:25.730 [2024-07-15 12:57:47.321858] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:25.730 [2024-07-15 12:57:47.322815] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:25.730 [2024-07-15 12:57:47.322826] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:25.730 [2024-07-15 12:57:47.322886] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:25.730 [2024-07-15 12:57:47.326238] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:25.730 Available Spare Threshold: 0% 00:12:25.730 Life Percentage Used: 0% 00:12:25.730 Data Units Read: 0 00:12:25.730 Data Units Written: 0 00:12:25.730 Host Read Commands: 0 00:12:25.730 Host Write Commands: 0 00:12:25.730 Controller Busy Time: 0 minutes 00:12:25.730 Power Cycles: 0 00:12:25.730 Power On Hours: 0 hours 00:12:25.730 Unsafe Shutdowns: 0 00:12:25.730 Unrecoverable Media Errors: 0 00:12:25.730 Lifetime Error Log Entries: 0 00:12:25.730 Warning Temperature Time: 0 minutes 00:12:25.730 Critical Temperature Time: 0 minutes 00:12:25.730 00:12:25.730 Number of Queues 00:12:25.730 ================ 00:12:25.730 Number of I/O Submission Queues: 127 00:12:25.730 Number of I/O Completion Queues: 127 00:12:25.730 00:12:25.730 Active Namespaces 00:12:25.730 ================= 00:12:25.730 Namespace ID:1 00:12:25.730 Error Recovery Timeout: Unlimited 00:12:25.730 Command
Set Identifier: NVM (00h) 00:12:25.730 Deallocate: Supported 00:12:25.730 Deallocated/Unwritten Error: Not Supported 00:12:25.730 Deallocated Read Value: Unknown 00:12:25.730 Deallocate in Write Zeroes: Not Supported 00:12:25.730 Deallocated Guard Field: 0xFFFF 00:12:25.730 Flush: Supported 00:12:25.730 Reservation: Supported 00:12:25.730 Namespace Sharing Capabilities: Multiple Controllers 00:12:25.730 Size (in LBAs): 131072 (0GiB) 00:12:25.730 Capacity (in LBAs): 131072 (0GiB) 00:12:25.730 Utilization (in LBAs): 131072 (0GiB) 00:12:25.730 NGUID: F19A04F71CDC4DDD9A57479A95D8674F 00:12:25.730 UUID: f19a04f7-1cdc-4ddd-9a57-479a95d8674f 00:12:25.730 Thin Provisioning: Not Supported 00:12:25.730 Per-NS Atomic Units: Yes 00:12:25.730 Atomic Boundary Size (Normal): 0 00:12:25.730 Atomic Boundary Size (PFail): 0 00:12:25.730 Atomic Boundary Offset: 0 00:12:25.730 Maximum Single Source Range Length: 65535 00:12:25.730 Maximum Copy Length: 65535 00:12:25.730 Maximum Source Range Count: 1 00:12:25.730 NGUID/EUI64 Never Reused: No 00:12:25.730 Namespace Write Protected: No 00:12:25.730 Number of LBA Formats: 1 00:12:25.730 Current LBA Format: LBA Format #00 00:12:25.730 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:25.730 00:12:25.730 12:57:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:25.730 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.730 [2024-07-15 12:57:47.509836] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:31.019 Initializing NVMe Controllers 00:12:31.019 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:31.020 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:31.020 Initialization complete. Launching workers. 00:12:31.020 ======================================================== 00:12:31.020 Latency(us) 00:12:31.020 Device Information : IOPS MiB/s Average min max 00:12:31.020 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39942.87 156.03 3204.26 829.96 8323.18 00:12:31.020 ======================================================== 00:12:31.020 Total : 39942.87 156.03 3204.26 829.96 8323.18 00:12:31.020 00:12:31.020 [2024-07-15 12:57:52.527776] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:31.020 12:57:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:31.020 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.020 [2024-07-15 12:57:52.706604] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:36.316 Initializing NVMe Controllers 00:12:36.316 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:36.316 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:36.316 Initialization complete. Launching workers. 
00:12:36.316 ======================================================== 00:12:36.316 Latency(us) 00:12:36.316 Device Information : IOPS MiB/s Average min max 00:12:36.316 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16040.75 62.66 7979.18 6467.77 8497.36 00:12:36.316 ======================================================== 00:12:36.316 Total : 16040.75 62.66 7979.18 6467.77 8497.36 00:12:36.316 00:12:36.316 [2024-07-15 12:57:57.742312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:36.316 12:57:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:36.316 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.316 [2024-07-15 12:57:57.942176] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.623 [2024-07-15 12:58:03.007414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.623 Initializing NVMe Controllers 00:12:41.623 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:41.623 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:41.623 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:41.623 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:41.623 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:41.623 Initialization complete. Launching workers. 00:12:41.623 Starting thread on core 2 00:12:41.623 Starting thread on core 3 00:12:41.623 Starting thread on core 1 00:12:41.623 12:58:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:41.623 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.623 [2024-07-15 12:58:03.284622] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:44.918 [2024-07-15 12:58:06.337167] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:44.918 Initializing NVMe Controllers 00:12:44.918 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:44.918 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:44.918 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:44.918 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:44.918 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:44.918 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:44.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:44.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:44.918 Initialization complete. Launching workers. 
00:12:44.918 Starting thread on core 1 with urgent priority queue 00:12:44.918 Starting thread on core 2 with urgent priority queue 00:12:44.918 Starting thread on core 3 with urgent priority queue 00:12:44.918 Starting thread on core 0 with urgent priority queue 00:12:44.918 SPDK bdev Controller (SPDK1 ) core 0: 14066.33 IO/s 7.11 secs/100000 ios 00:12:44.918 SPDK bdev Controller (SPDK1 ) core 1: 9991.33 IO/s 10.01 secs/100000 ios 00:12:44.918 SPDK bdev Controller (SPDK1 ) core 2: 12395.00 IO/s 8.07 secs/100000 ios 00:12:44.918 SPDK bdev Controller (SPDK1 ) core 3: 7780.67 IO/s 12.85 secs/100000 ios 00:12:44.918 ======================================================== 00:12:44.918 00:12:44.918 12:58:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:44.918 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.918 [2024-07-15 12:58:06.606695] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:44.918 Initializing NVMe Controllers 00:12:44.918 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:44.918 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:44.918 Namespace ID: 1 size: 0GB 00:12:44.918 Initialization complete. 00:12:44.918 INFO: using host memory buffer for IO 00:12:44.918 Hello world! 00:12:44.919 [2024-07-15 12:58:06.639888] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:44.919 12:58:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:45.178 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.178 [2024-07-15 12:58:06.906122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.119 Initializing NVMe Controllers 00:12:46.119 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.119 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.119 Initialization complete. Launching workers. 
00:12:46.119 submit (in ns) avg, min, max = 8419.3, 3913.3, 4000316.7 00:12:46.119 complete (in ns) avg, min, max = 17147.4, 2380.8, 6988800.0 00:12:46.119 00:12:46.119 Submit histogram 00:12:46.119 ================ 00:12:46.119 Range in us Cumulative Count 00:12:46.119 3.893 - 3.920: 0.0467% ( 9) 00:12:46.119 3.920 - 3.947: 2.5249% ( 478) 00:12:46.119 3.947 - 3.973: 8.7153% ( 1194) 00:12:46.119 3.973 - 4.000: 19.2036% ( 2023) 00:12:46.119 4.000 - 4.027: 30.4645% ( 2172) 00:12:46.119 4.027 - 4.053: 43.3793% ( 2491) 00:12:46.119 4.053 - 4.080: 59.3219% ( 3075) 00:12:46.119 4.080 - 4.107: 75.3629% ( 3094) 00:12:46.119 4.107 - 4.133: 87.7022% ( 2380) 00:12:46.119 4.133 - 4.160: 94.5406% ( 1319) 00:12:46.119 4.160 - 4.187: 97.8484% ( 638) 00:12:46.119 4.187 - 4.213: 99.0823% ( 238) 00:12:46.119 4.213 - 4.240: 99.4608% ( 73) 00:12:46.119 4.240 - 4.267: 99.5127% ( 10) 00:12:46.119 4.267 - 4.293: 99.5334% ( 4) 00:12:46.119 4.293 - 4.320: 99.5438% ( 2) 00:12:46.119 4.720 - 4.747: 99.5489% ( 1) 00:12:46.119 4.880 - 4.907: 99.5541% ( 1) 00:12:46.119 5.093 - 5.120: 99.5593% ( 1) 00:12:46.119 5.200 - 5.227: 99.5697% ( 2) 00:12:46.119 5.280 - 5.307: 99.5749% ( 1) 00:12:46.119 5.467 - 5.493: 99.5800% ( 1) 00:12:46.119 5.760 - 5.787: 99.5904% ( 2) 00:12:46.119 5.840 - 5.867: 99.5956% ( 1) 00:12:46.119 6.080 - 6.107: 99.6008% ( 1) 00:12:46.119 6.160 - 6.187: 99.6163% ( 3) 00:12:46.119 6.213 - 6.240: 99.6267% ( 2) 00:12:46.119 6.240 - 6.267: 99.6319% ( 1) 00:12:46.119 6.267 - 6.293: 99.6371% ( 1) 00:12:46.119 6.320 - 6.347: 99.6423% ( 1) 00:12:46.119 6.347 - 6.373: 99.6526% ( 2) 00:12:46.119 6.373 - 6.400: 99.6578% ( 1) 00:12:46.119 6.400 - 6.427: 99.6630% ( 1) 00:12:46.119 6.453 - 6.480: 99.6682% ( 1) 00:12:46.119 6.480 - 6.507: 99.6734% ( 1) 00:12:46.119 6.507 - 6.533: 99.6889% ( 3) 00:12:46.119 6.533 - 6.560: 99.6993% ( 2) 00:12:46.119 6.560 - 6.587: 99.7045% ( 1) 00:12:46.119 6.587 - 6.613: 99.7097% ( 1) 00:12:46.119 6.613 - 6.640: 99.7148% ( 1) 00:12:46.119 6.720 - 6.747: 99.7200% ( 1) 00:12:46.119 6.747 - 6.773: 99.7304% ( 2) 00:12:46.119 6.773 - 6.800: 99.7356% ( 1) 00:12:46.119 6.800 - 6.827: 99.7408% ( 1) 00:12:46.119 6.827 - 6.880: 99.7511% ( 2) 00:12:46.119 6.880 - 6.933: 99.7667% ( 3) 00:12:46.119 6.933 - 6.987: 99.7719% ( 1) 00:12:46.119 6.987 - 7.040: 99.7771% ( 1) 00:12:46.119 7.040 - 7.093: 99.7926% ( 3) 00:12:46.119 7.093 - 7.147: 99.8030% ( 2) 00:12:46.119 7.253 - 7.307: 99.8134% ( 2) 00:12:46.119 7.307 - 7.360: 99.8237% ( 2) 00:12:46.119 7.413 - 7.467: 99.8289% ( 1) 00:12:46.119 7.467 - 7.520: 99.8341% ( 1) 00:12:46.119 7.520 - 7.573: 99.8445% ( 2) 00:12:46.119 7.573 - 7.627: 99.8496% ( 1) 00:12:46.119 7.680 - 7.733: 99.8548% ( 1) 00:12:46.119 7.733 - 7.787: 99.8652% ( 2) 00:12:46.119 7.787 - 7.840: 99.8704% ( 1) 00:12:46.119 7.893 - 7.947: 99.8808% ( 2) 00:12:46.119 8.053 - 8.107: 99.8859% ( 1) 00:12:46.119 13.867 - 13.973: 99.8911% ( 1) 00:12:46.119 3986.773 - 4014.080: 100.0000% ( 21) 00:12:46.119 00:12:46.119 Complete histogram 00:12:46.119 ================== 00:12:46.119 Range in us Cumulative Count 00:12:46.119 2.373 - 2.387: 0.0052% ( 1) 00:12:46.119 2.387 - 2.400: 0.0207% ( 3) 00:12:46.119 2.400 - 2.413: 0.3629% ( 66) 00:12:46.119 2.413 - 2.427: 0.7829% ( 81) 00:12:46.119 2.427 - 2.440: 0.9177% ( 26) 00:12:46.119 2.440 - 2.453: 0.9903% ( 14) 00:12:46.119 2.453 - 2.467: 49.3571% ( 9329) 00:12:46.119 2.467 - 2.480: 58.6634% ( 1795) 00:12:46.119 2.480 - 2.493: 71.5523% ( 2486) 00:12:46.119 2.493 - 2.507: 78.9818% ( 1433) 00:12:46.119 2.507 - 2.520: 81.4081% ( 468) 00:12:46.119 2.520 
- 2.533: 84.3115% ( 560) 00:12:46.119 2.533 - 2.547: 91.1707% ( 1323) 00:12:46.119 2.547 - 2.560: 95.0124% ( 741) 00:12:46.119 2.560 - [2024-07-15 12:58:07.923570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:46.381 2.573: 97.1485% ( 412) 00:12:46.381 2.573 - 2.587: 98.6987% ( 299) 00:12:46.381 2.587 - 2.600: 99.2793% ( 112) 00:12:46.381 2.600 - 2.613: 99.3882% ( 21) 00:12:46.381 2.640 - 2.653: 99.3934% ( 1) 00:12:46.381 2.680 - 2.693: 99.3986% ( 1) 00:12:46.381 2.760 - 2.773: 99.4038% ( 1) 00:12:46.381 4.613 - 4.640: 99.4193% ( 3) 00:12:46.381 4.693 - 4.720: 99.4297% ( 2) 00:12:46.381 4.720 - 4.747: 99.4349% ( 1) 00:12:46.381 4.747 - 4.773: 99.4401% ( 1) 00:12:46.381 4.800 - 4.827: 99.4453% ( 1) 00:12:46.381 4.827 - 4.853: 99.4504% ( 1) 00:12:46.381 4.907 - 4.933: 99.4608% ( 2) 00:12:46.381 4.960 - 4.987: 99.4660% ( 1) 00:12:46.381 4.987 - 5.013: 99.4712% ( 1) 00:12:46.381 5.040 - 5.067: 99.4764% ( 1) 00:12:46.381 5.093 - 5.120: 99.4815% ( 1) 00:12:46.381 5.147 - 5.173: 99.4867% ( 1) 00:12:46.381 5.173 - 5.200: 99.4919% ( 1) 00:12:46.381 5.200 - 5.227: 99.5023% ( 2) 00:12:46.381 5.227 - 5.253: 99.5127% ( 2) 00:12:46.381 5.253 - 5.280: 99.5178% ( 1) 00:12:46.381 5.280 - 5.307: 99.5230% ( 1) 00:12:46.381 5.333 - 5.360: 99.5282% ( 1) 00:12:46.381 5.360 - 5.387: 99.5334% ( 1) 00:12:46.381 5.387 - 5.413: 99.5386% ( 1) 00:12:46.381 5.413 - 5.440: 99.5438% ( 1) 00:12:46.381 5.440 - 5.467: 99.5489% ( 1) 00:12:46.381 5.467 - 5.493: 99.5541% ( 1) 00:12:46.381 5.493 - 5.520: 99.5645% ( 2) 00:12:46.381 5.547 - 5.573: 99.5697% ( 1) 00:12:46.381 5.680 - 5.707: 99.5749% ( 1) 00:12:46.381 5.813 - 5.840: 99.5852% ( 2) 00:12:46.381 5.893 - 5.920: 99.5904% ( 1) 00:12:46.381 5.973 - 6.000: 99.5956% ( 1) 00:12:46.381 6.400 - 6.427: 99.6008% ( 1) 00:12:46.381 6.827 - 6.880: 99.6060% ( 1) 00:12:46.381 7.147 - 7.200: 99.6112% ( 1) 00:12:46.381 7.307 - 7.360: 99.6163% ( 1) 00:12:46.381 11.947 - 12.000: 99.6215% ( 1) 00:12:46.381 12.107 - 12.160: 99.6267% ( 1) 00:12:46.381 13.120 - 13.173: 99.6319% ( 1) 00:12:46.381 43.520 - 43.733: 99.6371% ( 1) 00:12:46.381 3986.773 - 4014.080: 99.9948% ( 69) 00:12:46.381 6963.200 - 6990.507: 100.0000% ( 1) 00:12:46.381 00:12:46.381 12:58:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:46.381 12:58:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:46.381 12:58:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:46.381 12:58:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:46.381 12:58:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:46.381 [ 00:12:46.381 { 00:12:46.381 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:46.381 "subtype": "Discovery", 00:12:46.381 "listen_addresses": [], 00:12:46.381 "allow_any_host": true, 00:12:46.381 "hosts": [] 00:12:46.381 }, 00:12:46.381 { 00:12:46.381 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:46.381 "subtype": "NVMe", 00:12:46.381 "listen_addresses": [ 00:12:46.381 { 00:12:46.381 "trtype": "VFIOUSER", 00:12:46.381 "adrfam": "IPv4", 00:12:46.381 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:46.381 "trsvcid": "0" 00:12:46.381 } 00:12:46.381 ], 00:12:46.381 "allow_any_host": true, 
00:12:46.381 "hosts": [], 00:12:46.381 "serial_number": "SPDK1", 00:12:46.381 "model_number": "SPDK bdev Controller", 00:12:46.381 "max_namespaces": 32, 00:12:46.381 "min_cntlid": 1, 00:12:46.381 "max_cntlid": 65519, 00:12:46.381 "namespaces": [ 00:12:46.381 { 00:12:46.381 "nsid": 1, 00:12:46.381 "bdev_name": "Malloc1", 00:12:46.381 "name": "Malloc1", 00:12:46.382 "nguid": "F19A04F71CDC4DDD9A57479A95D8674F", 00:12:46.382 "uuid": "f19a04f7-1cdc-4ddd-9a57-479a95d8674f" 00:12:46.382 } 00:12:46.382 ] 00:12:46.382 }, 00:12:46.382 { 00:12:46.382 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:46.382 "subtype": "NVMe", 00:12:46.382 "listen_addresses": [ 00:12:46.382 { 00:12:46.382 "trtype": "VFIOUSER", 00:12:46.382 "adrfam": "IPv4", 00:12:46.382 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:46.382 "trsvcid": "0" 00:12:46.382 } 00:12:46.382 ], 00:12:46.382 "allow_any_host": true, 00:12:46.382 "hosts": [], 00:12:46.382 "serial_number": "SPDK2", 00:12:46.382 "model_number": "SPDK bdev Controller", 00:12:46.382 "max_namespaces": 32, 00:12:46.382 "min_cntlid": 1, 00:12:46.382 "max_cntlid": 65519, 00:12:46.382 "namespaces": [ 00:12:46.382 { 00:12:46.382 "nsid": 1, 00:12:46.382 "bdev_name": "Malloc2", 00:12:46.382 "name": "Malloc2", 00:12:46.382 "nguid": "D72F6F66392C4438BAC31BFA5F22F929", 00:12:46.382 "uuid": "d72f6f66-392c-4438-bac3-1bfa5f22f929" 00:12:46.382 } 00:12:46.382 ] 00:12:46.382 } 00:12:46.382 ] 00:12:46.382 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:46.382 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:46.382 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=592109 00:12:46.382 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:46.382 12:58:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:46.382 12:58:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:46.382 12:58:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:46.382 12:58:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:46.382 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:46.382 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:46.382 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.642 Malloc3 00:12:46.642 [2024-07-15 12:58:08.312668] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.642 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:46.986 [2024-07-15 12:58:08.474806] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:46.986 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:46.986 Asynchronous Event Request test 00:12:46.986 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.986 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.986 Registering asynchronous event callbacks... 00:12:46.987 Starting namespace attribute notice tests for all controllers... 00:12:46.987 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:46.987 aer_cb - Changed Namespace 00:12:46.987 Cleaning up... 00:12:46.987 [ 00:12:46.987 { 00:12:46.987 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:46.987 "subtype": "Discovery", 00:12:46.987 "listen_addresses": [], 00:12:46.987 "allow_any_host": true, 00:12:46.987 "hosts": [] 00:12:46.987 }, 00:12:46.987 { 00:12:46.987 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:46.987 "subtype": "NVMe", 00:12:46.987 "listen_addresses": [ 00:12:46.987 { 00:12:46.987 "trtype": "VFIOUSER", 00:12:46.987 "adrfam": "IPv4", 00:12:46.987 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:46.987 "trsvcid": "0" 00:12:46.987 } 00:12:46.987 ], 00:12:46.987 "allow_any_host": true, 00:12:46.987 "hosts": [], 00:12:46.987 "serial_number": "SPDK1", 00:12:46.987 "model_number": "SPDK bdev Controller", 00:12:46.987 "max_namespaces": 32, 00:12:46.987 "min_cntlid": 1, 00:12:46.987 "max_cntlid": 65519, 00:12:46.987 "namespaces": [ 00:12:46.987 { 00:12:46.987 "nsid": 1, 00:12:46.987 "bdev_name": "Malloc1", 00:12:46.987 "name": "Malloc1", 00:12:46.987 "nguid": "F19A04F71CDC4DDD9A57479A95D8674F", 00:12:46.987 "uuid": "f19a04f7-1cdc-4ddd-9a57-479a95d8674f" 00:12:46.987 }, 00:12:46.987 { 00:12:46.987 "nsid": 2, 00:12:46.987 "bdev_name": "Malloc3", 00:12:46.987 "name": "Malloc3", 00:12:46.987 "nguid": "AD8019F22F1547508A93EB7ECC5F1544", 00:12:46.987 "uuid": "ad8019f2-2f15-4750-8a93-eb7ecc5f1544" 00:12:46.987 } 00:12:46.987 ] 00:12:46.987 }, 00:12:46.987 { 00:12:46.987 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:46.987 "subtype": "NVMe", 00:12:46.987 "listen_addresses": [ 00:12:46.987 { 00:12:46.987 "trtype": "VFIOUSER", 00:12:46.987 "adrfam": "IPv4", 00:12:46.987 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:46.987 "trsvcid": "0" 00:12:46.987 } 00:12:46.987 ], 00:12:46.987 "allow_any_host": true, 00:12:46.987 "hosts": [], 00:12:46.987 "serial_number": "SPDK2", 00:12:46.987 "model_number": "SPDK bdev Controller", 00:12:46.987 
"max_namespaces": 32, 00:12:46.987 "min_cntlid": 1, 00:12:46.987 "max_cntlid": 65519, 00:12:46.987 "namespaces": [ 00:12:46.987 { 00:12:46.987 "nsid": 1, 00:12:46.987 "bdev_name": "Malloc2", 00:12:46.987 "name": "Malloc2", 00:12:46.987 "nguid": "D72F6F66392C4438BAC31BFA5F22F929", 00:12:46.987 "uuid": "d72f6f66-392c-4438-bac3-1bfa5f22f929" 00:12:46.987 } 00:12:46.987 ] 00:12:46.987 } 00:12:46.987 ] 00:12:46.987 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 592109 00:12:46.987 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:46.987 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:46.987 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:46.987 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:46.987 [2024-07-15 12:58:08.683593] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:12:46.987 [2024-07-15 12:58:08.683663] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid592251 ] 00:12:46.987 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.987 [2024-07-15 12:58:08.717786] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:46.987 [2024-07-15 12:58:08.722439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:46.987 [2024-07-15 12:58:08.722460] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3e6d3ca000 00:12:46.987 [2024-07-15 12:58:08.723441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:46.987 [2024-07-15 12:58:08.724441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:46.987 [2024-07-15 12:58:08.725454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:46.987 [2024-07-15 12:58:08.726456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:46.987 [2024-07-15 12:58:08.727459] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:46.987 [2024-07-15 12:58:08.728462] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:46.987 [2024-07-15 12:58:08.729469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:46.987 [2024-07-15 12:58:08.730475] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:46.987 [2024-07-15 12:58:08.731481] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:46.987 [2024-07-15 12:58:08.731491] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3e6d3bf000 00:12:46.987 [2024-07-15 12:58:08.732814] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:46.987 [2024-07-15 12:58:08.753392] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:46.987 [2024-07-15 12:58:08.753415] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:46.987 [2024-07-15 12:58:08.755470] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:46.987 [2024-07-15 12:58:08.755520] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:46.987 [2024-07-15 12:58:08.755599] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:46.987 [2024-07-15 12:58:08.755616] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:46.987 [2024-07-15 12:58:08.755621] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:46.987 [2024-07-15 12:58:08.756471] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:46.987 [2024-07-15 12:58:08.756482] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:46.987 [2024-07-15 12:58:08.756489] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:46.987 [2024-07-15 12:58:08.757484] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:46.987 [2024-07-15 12:58:08.757493] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:46.987 [2024-07-15 12:58:08.757501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:46.987 [2024-07-15 12:58:08.758491] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:46.987 [2024-07-15 12:58:08.758501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:46.987 [2024-07-15 12:58:08.759497] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:46.987 [2024-07-15 12:58:08.759505] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:46.987 [2024-07-15 12:58:08.759510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:46.987 [2024-07-15 12:58:08.759517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:46.987 [2024-07-15 12:58:08.759622] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:46.987 [2024-07-15 12:58:08.759627] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:46.987 [2024-07-15 12:58:08.759632] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:46.987 [2024-07-15 12:58:08.760501] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:46.987 [2024-07-15 12:58:08.761503] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:46.987 [2024-07-15 12:58:08.762507] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:46.987 [2024-07-15 12:58:08.763513] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:46.987 [2024-07-15 12:58:08.763552] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:46.987 [2024-07-15 12:58:08.764528] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:46.987 [2024-07-15 12:58:08.764537] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:46.987 [2024-07-15 12:58:08.764542] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:46.987 [2024-07-15 12:58:08.764563] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:46.987 [2024-07-15 12:58:08.764575] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:46.987 [2024-07-15 12:58:08.764588] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:46.987 [2024-07-15 12:58:08.764593] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:46.987 [2024-07-15 12:58:08.764606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.282 [2024-07-15 12:58:08.775238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:47.282 [2024-07-15 12:58:08.775250] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:47.282 [2024-07-15 12:58:08.775257] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:47.282 [2024-07-15 12:58:08.775265] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:47.282 [2024-07-15 12:58:08.775269] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:47.282 [2024-07-15 12:58:08.775274] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:47.282 [2024-07-15 12:58:08.775278] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:47.282 [2024-07-15 12:58:08.775283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.775290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.775301] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:47.282 [2024-07-15 12:58:08.783236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:47.282 [2024-07-15 12:58:08.783251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.282 [2024-07-15 12:58:08.783260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.282 [2024-07-15 12:58:08.783268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.282 [2024-07-15 12:58:08.783276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.282 [2024-07-15 12:58:08.783281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.783289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.783298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:47.282 [2024-07-15 12:58:08.791236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:47.282 [2024-07-15 12:58:08.791244] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:47.282 [2024-07-15 12:58:08.791248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.791255] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.791260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.791269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:47.282 [2024-07-15 12:58:08.799236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:47.282 [2024-07-15 12:58:08.799299] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.799307] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.799315] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:47.282 [2024-07-15 12:58:08.799323] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:47.282 [2024-07-15 12:58:08.799329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:47.282 [2024-07-15 12:58:08.807237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:47.282 [2024-07-15 12:58:08.807249] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:47.282 [2024-07-15 12:58:08.807258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.807266] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.807273] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:47.282 [2024-07-15 12:58:08.807277] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.282 [2024-07-15 12:58:08.807283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.282 [2024-07-15 12:58:08.815236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:47.282 [2024-07-15 12:58:08.815250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.815258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.815266] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:47.282 [2024-07-15 12:58:08.815270] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.282 [2024-07-15 12:58:08.815276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.282 [2024-07-15 12:58:08.823236] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:47.282 [2024-07-15 12:58:08.823245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.823252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.823260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:47.282 [2024-07-15 12:58:08.823266] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:47.283 [2024-07-15 12:58:08.823271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:47.283 [2024-07-15 12:58:08.823275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:47.283 [2024-07-15 12:58:08.823280] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:47.283 [2024-07-15 12:58:08.823285] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:47.283 [2024-07-15 12:58:08.823290] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:47.283 [2024-07-15 12:58:08.823308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:47.283 [2024-07-15 12:58:08.831237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:47.283 [2024-07-15 12:58:08.831251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:47.283 [2024-07-15 12:58:08.839237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:47.283 [2024-07-15 12:58:08.839250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:47.283 [2024-07-15 12:58:08.847237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:47.283 [2024-07-15 12:58:08.847250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:47.283 [2024-07-15 12:58:08.855237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:47.283 [2024-07-15 12:58:08.855255] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:47.283 [2024-07-15 12:58:08.855259] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:47.283 [2024-07-15 12:58:08.855263] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:12:47.283 [2024-07-15 12:58:08.855266] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:47.283 [2024-07-15 12:58:08.855273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:47.283 [2024-07-15 12:58:08.855280] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:47.283 [2024-07-15 12:58:08.855284] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:47.283 [2024-07-15 12:58:08.855290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:47.283 [2024-07-15 12:58:08.855298] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:47.283 [2024-07-15 12:58:08.855302] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.283 [2024-07-15 12:58:08.855307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.283 [2024-07-15 12:58:08.855315] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:47.283 [2024-07-15 12:58:08.855319] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:47.283 [2024-07-15 12:58:08.855325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:47.283 [2024-07-15 12:58:08.863235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:47.283 [2024-07-15 12:58:08.863250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:47.283 [2024-07-15 12:58:08.863261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:47.283 [2024-07-15 12:58:08.863268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:47.283 ===================================================== 00:12:47.283 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:47.283 ===================================================== 00:12:47.283 Controller Capabilities/Features 00:12:47.283 ================================ 00:12:47.283 Vendor ID: 4e58 00:12:47.283 Subsystem Vendor ID: 4e58 00:12:47.283 Serial Number: SPDK2 00:12:47.283 Model Number: SPDK bdev Controller 00:12:47.283 Firmware Version: 24.09 00:12:47.283 Recommended Arb Burst: 6 00:12:47.283 IEEE OUI Identifier: 8d 6b 50 00:12:47.283 Multi-path I/O 00:12:47.283 May have multiple subsystem ports: Yes 00:12:47.283 May have multiple controllers: Yes 00:12:47.283 Associated with SR-IOV VF: No 00:12:47.283 Max Data Transfer Size: 131072 00:12:47.283 Max Number of Namespaces: 32 00:12:47.283 Max Number of I/O Queues: 127 00:12:47.283 NVMe Specification Version (VS): 1.3 00:12:47.283 NVMe Specification Version (Identify): 1.3 00:12:47.283 Maximum Queue Entries: 256 00:12:47.283 Contiguous Queues Required: Yes 00:12:47.283 Arbitration Mechanisms 
Supported 00:12:47.283 Weighted Round Robin: Not Supported 00:12:47.283 Vendor Specific: Not Supported 00:12:47.283 Reset Timeout: 15000 ms 00:12:47.283 Doorbell Stride: 4 bytes 00:12:47.283 NVM Subsystem Reset: Not Supported 00:12:47.283 Command Sets Supported 00:12:47.283 NVM Command Set: Supported 00:12:47.283 Boot Partition: Not Supported 00:12:47.283 Memory Page Size Minimum: 4096 bytes 00:12:47.283 Memory Page Size Maximum: 4096 bytes 00:12:47.283 Persistent Memory Region: Not Supported 00:12:47.283 Optional Asynchronous Events Supported 00:12:47.283 Namespace Attribute Notices: Supported 00:12:47.283 Firmware Activation Notices: Not Supported 00:12:47.283 ANA Change Notices: Not Supported 00:12:47.283 PLE Aggregate Log Change Notices: Not Supported 00:12:47.283 LBA Status Info Alert Notices: Not Supported 00:12:47.283 EGE Aggregate Log Change Notices: Not Supported 00:12:47.283 Normal NVM Subsystem Shutdown event: Not Supported 00:12:47.283 Zone Descriptor Change Notices: Not Supported 00:12:47.283 Discovery Log Change Notices: Not Supported 00:12:47.283 Controller Attributes 00:12:47.283 128-bit Host Identifier: Supported 00:12:47.283 Non-Operational Permissive Mode: Not Supported 00:12:47.283 NVM Sets: Not Supported 00:12:47.283 Read Recovery Levels: Not Supported 00:12:47.283 Endurance Groups: Not Supported 00:12:47.283 Predictable Latency Mode: Not Supported 00:12:47.283 Traffic Based Keep ALive: Not Supported 00:12:47.283 Namespace Granularity: Not Supported 00:12:47.283 SQ Associations: Not Supported 00:12:47.283 UUID List: Not Supported 00:12:47.283 Multi-Domain Subsystem: Not Supported 00:12:47.283 Fixed Capacity Management: Not Supported 00:12:47.283 Variable Capacity Management: Not Supported 00:12:47.283 Delete Endurance Group: Not Supported 00:12:47.283 Delete NVM Set: Not Supported 00:12:47.283 Extended LBA Formats Supported: Not Supported 00:12:47.283 Flexible Data Placement Supported: Not Supported 00:12:47.283 00:12:47.283 Controller Memory Buffer Support 00:12:47.283 ================================ 00:12:47.283 Supported: No 00:12:47.283 00:12:47.283 Persistent Memory Region Support 00:12:47.283 ================================ 00:12:47.283 Supported: No 00:12:47.283 00:12:47.283 Admin Command Set Attributes 00:12:47.283 ============================ 00:12:47.283 Security Send/Receive: Not Supported 00:12:47.283 Format NVM: Not Supported 00:12:47.283 Firmware Activate/Download: Not Supported 00:12:47.283 Namespace Management: Not Supported 00:12:47.283 Device Self-Test: Not Supported 00:12:47.283 Directives: Not Supported 00:12:47.283 NVMe-MI: Not Supported 00:12:47.283 Virtualization Management: Not Supported 00:12:47.283 Doorbell Buffer Config: Not Supported 00:12:47.283 Get LBA Status Capability: Not Supported 00:12:47.283 Command & Feature Lockdown Capability: Not Supported 00:12:47.283 Abort Command Limit: 4 00:12:47.283 Async Event Request Limit: 4 00:12:47.283 Number of Firmware Slots: N/A 00:12:47.283 Firmware Slot 1 Read-Only: N/A 00:12:47.283 Firmware Activation Without Reset: N/A 00:12:47.283 Multiple Update Detection Support: N/A 00:12:47.283 Firmware Update Granularity: No Information Provided 00:12:47.283 Per-Namespace SMART Log: No 00:12:47.283 Asymmetric Namespace Access Log Page: Not Supported 00:12:47.283 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:47.283 Command Effects Log Page: Supported 00:12:47.283 Get Log Page Extended Data: Supported 00:12:47.283 Telemetry Log Pages: Not Supported 00:12:47.283 Persistent Event Log Pages: Not Supported 
00:12:47.283 Supported Log Pages Log Page: May Support 00:12:47.283 Commands Supported & Effects Log Page: Not Supported 00:12:47.283 Feature Identifiers & Effects Log Page:May Support 00:12:47.283 NVMe-MI Commands & Effects Log Page: May Support 00:12:47.283 Data Area 4 for Telemetry Log: Not Supported 00:12:47.283 Error Log Page Entries Supported: 128 00:12:47.283 Keep Alive: Supported 00:12:47.283 Keep Alive Granularity: 10000 ms 00:12:47.283 00:12:47.283 NVM Command Set Attributes 00:12:47.283 ========================== 00:12:47.283 Submission Queue Entry Size 00:12:47.283 Max: 64 00:12:47.283 Min: 64 00:12:47.283 Completion Queue Entry Size 00:12:47.283 Max: 16 00:12:47.283 Min: 16 00:12:47.283 Number of Namespaces: 32 00:12:47.283 Compare Command: Supported 00:12:47.283 Write Uncorrectable Command: Not Supported 00:12:47.283 Dataset Management Command: Supported 00:12:47.283 Write Zeroes Command: Supported 00:12:47.283 Set Features Save Field: Not Supported 00:12:47.283 Reservations: Not Supported 00:12:47.283 Timestamp: Not Supported 00:12:47.283 Copy: Supported 00:12:47.283 Volatile Write Cache: Present 00:12:47.283 Atomic Write Unit (Normal): 1 00:12:47.283 Atomic Write Unit (PFail): 1 00:12:47.283 Atomic Compare & Write Unit: 1 00:12:47.283 Fused Compare & Write: Supported 00:12:47.283 Scatter-Gather List 00:12:47.283 SGL Command Set: Supported (Dword aligned) 00:12:47.283 SGL Keyed: Not Supported 00:12:47.283 SGL Bit Bucket Descriptor: Not Supported 00:12:47.283 SGL Metadata Pointer: Not Supported 00:12:47.283 Oversized SGL: Not Supported 00:12:47.284 SGL Metadata Address: Not Supported 00:12:47.284 SGL Offset: Not Supported 00:12:47.284 Transport SGL Data Block: Not Supported 00:12:47.284 Replay Protected Memory Block: Not Supported 00:12:47.284 00:12:47.284 Firmware Slot Information 00:12:47.284 ========================= 00:12:47.284 Active slot: 1 00:12:47.284 Slot 1 Firmware Revision: 24.09 00:12:47.284 00:12:47.284 00:12:47.284 Commands Supported and Effects 00:12:47.284 ============================== 00:12:47.284 Admin Commands 00:12:47.284 -------------- 00:12:47.284 Get Log Page (02h): Supported 00:12:47.284 Identify (06h): Supported 00:12:47.284 Abort (08h): Supported 00:12:47.284 Set Features (09h): Supported 00:12:47.284 Get Features (0Ah): Supported 00:12:47.284 Asynchronous Event Request (0Ch): Supported 00:12:47.284 Keep Alive (18h): Supported 00:12:47.284 I/O Commands 00:12:47.284 ------------ 00:12:47.284 Flush (00h): Supported LBA-Change 00:12:47.284 Write (01h): Supported LBA-Change 00:12:47.284 Read (02h): Supported 00:12:47.284 Compare (05h): Supported 00:12:47.284 Write Zeroes (08h): Supported LBA-Change 00:12:47.284 Dataset Management (09h): Supported LBA-Change 00:12:47.284 Copy (19h): Supported LBA-Change 00:12:47.284 00:12:47.284 Error Log 00:12:47.284 ========= 00:12:47.284 00:12:47.284 Arbitration 00:12:47.284 =========== 00:12:47.284 Arbitration Burst: 1 00:12:47.284 00:12:47.284 Power Management 00:12:47.284 ================ 00:12:47.284 Number of Power States: 1 00:12:47.284 Current Power State: Power State #0 00:12:47.284 Power State #0: 00:12:47.284 Max Power: 0.00 W 00:12:47.284 Non-Operational State: Operational 00:12:47.284 Entry Latency: Not Reported 00:12:47.284 Exit Latency: Not Reported 00:12:47.284 Relative Read Throughput: 0 00:12:47.284 Relative Read Latency: 0 00:12:47.284 Relative Write Throughput: 0 00:12:47.284 Relative Write Latency: 0 00:12:47.284 Idle Power: Not Reported 00:12:47.284 Active Power: Not Reported 00:12:47.284 
Non-Operational Permissive Mode: Not Supported 00:12:47.284 00:12:47.284 Health Information 00:12:47.284 ================== 00:12:47.284 Critical Warnings: 00:12:47.284 Available Spare Space: OK 00:12:47.284 Temperature: OK 00:12:47.284 Device Reliability: OK 00:12:47.284 Read Only: No 00:12:47.284 Volatile Memory Backup: OK 00:12:47.284 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:47.284 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:47.284 Available Spare: 0% 00:12:47.284 Available Sp[2024-07-15 12:58:08.863364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:47.284 [2024-07-15 12:58:08.871236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:47.284 [2024-07-15 12:58:08.871273] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:47.284 [2024-07-15 12:58:08.871293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.284 [2024-07-15 12:58:08.871300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.284 [2024-07-15 12:58:08.871306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.284 [2024-07-15 12:58:08.871312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.284 [2024-07-15 12:58:08.871352] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:47.284 [2024-07-15 12:58:08.871362] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:47.284 [2024-07-15 12:58:08.872352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:47.284 [2024-07-15 12:58:08.872401] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:47.284 [2024-07-15 12:58:08.872408] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:47.284 [2024-07-15 12:58:08.873352] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:47.284 [2024-07-15 12:58:08.873365] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:47.284 [2024-07-15 12:58:08.873411] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:47.284 [2024-07-15 12:58:08.874784] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:47.284 are Threshold: 0% 00:12:47.284 Life Percentage Used: 0% 00:12:47.284 Data Units Read: 0 00:12:47.284 Data Units Written: 0 00:12:47.284 Host Read Commands: 0 00:12:47.284 Host Write Commands: 0 00:12:47.284 Controller Busy Time: 0 minutes 00:12:47.284 Power Cycles: 0 00:12:47.284 Power On Hours: 0 hours 00:12:47.284 Unsafe Shutdowns: 0 00:12:47.284 Unrecoverable Media 
Errors: 0 00:12:47.284 Lifetime Error Log Entries: 0 00:12:47.284 Warning Temperature Time: 0 minutes 00:12:47.284 Critical Temperature Time: 0 minutes 00:12:47.284 00:12:47.284 Number of Queues 00:12:47.284 ================ 00:12:47.284 Number of I/O Submission Queues: 127 00:12:47.284 Number of I/O Completion Queues: 127 00:12:47.284 00:12:47.284 Active Namespaces 00:12:47.284 ================= 00:12:47.284 Namespace ID:1 00:12:47.284 Error Recovery Timeout: Unlimited 00:12:47.284 Command Set Identifier: NVM (00h) 00:12:47.284 Deallocate: Supported 00:12:47.284 Deallocated/Unwritten Error: Not Supported 00:12:47.284 Deallocated Read Value: Unknown 00:12:47.284 Deallocate in Write Zeroes: Not Supported 00:12:47.284 Deallocated Guard Field: 0xFFFF 00:12:47.284 Flush: Supported 00:12:47.284 Reservation: Supported 00:12:47.284 Namespace Sharing Capabilities: Multiple Controllers 00:12:47.284 Size (in LBAs): 131072 (0GiB) 00:12:47.284 Capacity (in LBAs): 131072 (0GiB) 00:12:47.284 Utilization (in LBAs): 131072 (0GiB) 00:12:47.284 NGUID: D72F6F66392C4438BAC31BFA5F22F929 00:12:47.284 UUID: d72f6f66-392c-4438-bac3-1bfa5f22f929 00:12:47.284 Thin Provisioning: Not Supported 00:12:47.284 Per-NS Atomic Units: Yes 00:12:47.284 Atomic Boundary Size (Normal): 0 00:12:47.284 Atomic Boundary Size (PFail): 0 00:12:47.284 Atomic Boundary Offset: 0 00:12:47.284 Maximum Single Source Range Length: 65535 00:12:47.284 Maximum Copy Length: 65535 00:12:47.284 Maximum Source Range Count: 1 00:12:47.284 NGUID/EUI64 Never Reused: No 00:12:47.284 Namespace Write Protected: No 00:12:47.284 Number of LBA Formats: 1 00:12:47.284 Current LBA Format: LBA Format #00 00:12:47.284 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:47.284 00:12:47.284 12:58:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:47.284 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.284 [2024-07-15 12:58:09.059255] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:52.572 Initializing NVMe Controllers 00:12:52.572 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:52.572 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:52.572 Initialization complete. Launching workers. 
00:12:52.572 ======================================================== 00:12:52.572 Latency(us) 00:12:52.572 Device Information : IOPS MiB/s Average min max 00:12:52.572 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39965.71 156.12 3202.61 829.13 8324.27 00:12:52.572 ======================================================== 00:12:52.572 Total : 39965.71 156.12 3202.61 829.13 8324.27 00:12:52.572 00:12:52.572 [2024-07-15 12:58:14.167421] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:52.572 12:58:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:52.572 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.572 [2024-07-15 12:58:14.346991] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.855 Initializing NVMe Controllers 00:12:57.855 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:57.855 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:57.855 Initialization complete. Launching workers. 00:12:57.855 ======================================================== 00:12:57.855 Latency(us) 00:12:57.855 Device Information : IOPS MiB/s Average min max 00:12:57.855 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35628.96 139.18 3591.78 1105.53 6768.55 00:12:57.855 ======================================================== 00:12:57.855 Total : 35628.96 139.18 3591.78 1105.53 6768.55 00:12:57.855 00:12:57.855 [2024-07-15 12:58:19.365296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:57.855 12:58:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:57.855 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.855 [2024-07-15 12:58:19.559462] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:03.139 [2024-07-15 12:58:24.702313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:03.139 Initializing NVMe Controllers 00:13:03.139 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:03.139 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:03.139 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:03.139 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:03.139 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:03.139 Initialization complete. Launching workers. 
00:13:03.139 Starting thread on core 2 00:13:03.140 Starting thread on core 3 00:13:03.140 Starting thread on core 1 00:13:03.140 12:58:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:03.140 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.400 [2024-07-15 12:58:24.968699] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:06.705 [2024-07-15 12:58:28.025615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:06.705 Initializing NVMe Controllers 00:13:06.705 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:06.705 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:06.705 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:06.705 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:06.705 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:06.705 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:06.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:06.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:06.705 Initialization complete. Launching workers. 00:13:06.705 Starting thread on core 1 with urgent priority queue 00:13:06.705 Starting thread on core 2 with urgent priority queue 00:13:06.705 Starting thread on core 3 with urgent priority queue 00:13:06.705 Starting thread on core 0 with urgent priority queue 00:13:06.705 SPDK bdev Controller (SPDK2 ) core 0: 15197.67 IO/s 6.58 secs/100000 ios 00:13:06.705 SPDK bdev Controller (SPDK2 ) core 1: 6654.33 IO/s 15.03 secs/100000 ios 00:13:06.705 SPDK bdev Controller (SPDK2 ) core 2: 16255.00 IO/s 6.15 secs/100000 ios 00:13:06.705 SPDK bdev Controller (SPDK2 ) core 3: 6925.67 IO/s 14.44 secs/100000 ios 00:13:06.705 ======================================================== 00:13:06.705 00:13:06.705 12:58:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:06.705 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.705 [2024-07-15 12:58:28.299673] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:06.705 Initializing NVMe Controllers 00:13:06.705 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:06.705 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:06.705 Namespace ID: 1 size: 0GB 00:13:06.705 Initialization complete. 00:13:06.705 INFO: using host memory buffer for IO 00:13:06.705 Hello world! 
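The performance and arbitration runs above are all driven against the same vfio-user endpoint with stock SPDK example binaries. A minimal sketch of those invocations, assuming the build tree and socket path used by this job (flags are the ones echoed in the trace above; the SPDK and TGT variable names are only for brevity):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TGT='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
# 4 KiB reads, then writes, at queue depth 128 for 5 s, pinned to core 1 (mask 0x2)
$SPDK/build/bin/spdk_nvme_perf -r "$TGT" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
$SPDK/build/bin/spdk_nvme_perf -r "$TGT" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
# 50/50 random mix across three cores (mask 0xE), then the per-core arbitration example
$SPDK/build/examples/reconnect -r "$TGT" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
$SPDK/build/examples/arbitration -t 3 -r "$TGT" -d 256 -g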
00:13:06.705 [2024-07-15 12:58:28.309720] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:06.705 12:58:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:06.705 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.966 [2024-07-15 12:58:28.582492] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:07.909 Initializing NVMe Controllers 00:13:07.909 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:07.909 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:07.909 Initialization complete. Launching workers. 00:13:07.909 submit (in ns) avg, min, max = 7748.7, 3895.8, 4005988.3 00:13:07.909 complete (in ns) avg, min, max = 17338.5, 2375.8, 6990430.8 00:13:07.909 00:13:07.909 Submit histogram 00:13:07.909 ================ 00:13:07.909 Range in us Cumulative Count 00:13:07.909 3.893 - 3.920: 1.4882% ( 290) 00:13:07.909 3.920 - 3.947: 5.9989% ( 879) 00:13:07.909 3.947 - 3.973: 14.6508% ( 1686) 00:13:07.909 3.973 - 4.000: 26.3920% ( 2288) 00:13:07.909 4.000 - 4.027: 38.0921% ( 2280) 00:13:07.909 4.027 - 4.053: 51.4702% ( 2607) 00:13:07.909 4.053 - 4.080: 69.1076% ( 3437) 00:13:07.909 4.080 - 4.107: 83.4043% ( 2786) 00:13:07.909 4.107 - 4.133: 92.2307% ( 1720) 00:13:07.909 4.133 - 4.160: 96.6337% ( 858) 00:13:07.909 4.160 - 4.187: 98.4297% ( 350) 00:13:07.909 4.187 - 4.213: 99.0968% ( 130) 00:13:07.909 4.213 - 4.240: 99.3175% ( 43) 00:13:07.909 4.240 - 4.267: 99.3893% ( 14) 00:13:07.909 4.267 - 4.293: 99.4047% ( 3) 00:13:07.909 4.400 - 4.427: 99.4150% ( 2) 00:13:07.909 4.480 - 4.507: 99.4253% ( 2) 00:13:07.909 4.587 - 4.613: 99.4304% ( 1) 00:13:07.909 4.827 - 4.853: 99.4355% ( 1) 00:13:07.909 4.933 - 4.960: 99.4407% ( 1) 00:13:07.909 5.013 - 5.040: 99.4458% ( 1) 00:13:07.909 5.307 - 5.333: 99.4509% ( 1) 00:13:07.909 5.413 - 5.440: 99.4560% ( 1) 00:13:07.909 5.760 - 5.787: 99.4612% ( 1) 00:13:07.909 5.840 - 5.867: 99.4766% ( 3) 00:13:07.909 5.867 - 5.893: 99.4817% ( 1) 00:13:07.909 5.973 - 6.000: 99.4920% ( 2) 00:13:07.909 6.000 - 6.027: 99.5022% ( 2) 00:13:07.909 6.027 - 6.053: 99.5279% ( 5) 00:13:07.909 6.053 - 6.080: 99.5382% ( 2) 00:13:07.909 6.133 - 6.160: 99.5587% ( 4) 00:13:07.909 6.160 - 6.187: 99.5689% ( 2) 00:13:07.909 6.187 - 6.213: 99.5741% ( 1) 00:13:07.909 6.213 - 6.240: 99.5792% ( 1) 00:13:07.909 6.240 - 6.267: 99.5946% ( 3) 00:13:07.909 6.267 - 6.293: 99.5997% ( 1) 00:13:07.909 6.293 - 6.320: 99.6151% ( 3) 00:13:07.909 6.427 - 6.453: 99.6203% ( 1) 00:13:07.909 6.453 - 6.480: 99.6254% ( 1) 00:13:07.909 6.507 - 6.533: 99.6305% ( 1) 00:13:07.909 6.533 - 6.560: 99.6357% ( 1) 00:13:07.909 6.587 - 6.613: 99.6459% ( 2) 00:13:07.909 6.640 - 6.667: 99.6562% ( 2) 00:13:07.909 6.667 - 6.693: 99.6613% ( 1) 00:13:07.909 6.747 - 6.773: 99.6716% ( 2) 00:13:07.909 6.773 - 6.800: 99.6818% ( 2) 00:13:07.909 6.800 - 6.827: 99.6870% ( 1) 00:13:07.909 6.827 - 6.880: 99.6921% ( 1) 00:13:07.909 6.933 - 6.987: 99.7024% ( 2) 00:13:07.909 6.987 - 7.040: 99.7229% ( 4) 00:13:07.909 7.040 - 7.093: 99.7383% ( 3) 00:13:07.909 7.093 - 7.147: 99.7434% ( 1) 00:13:07.909 7.147 - 7.200: 99.7639% ( 4) 00:13:07.909 7.200 - 7.253: 99.7691% ( 1) 00:13:07.909 7.253 - 7.307: 99.7793% ( 2) 00:13:07.909 7.307 - 7.360: 99.7999% ( 4) 00:13:07.909 7.360 - 7.413: 99.8050% ( 1) 
00:13:07.909 7.413 - 7.467: 99.8101% ( 1) 00:13:07.909 7.467 - 7.520: 99.8153% ( 1) 00:13:07.909 7.520 - 7.573: 99.8255% ( 2) 00:13:07.909 7.573 - 7.627: 99.8358% ( 2) 00:13:07.909 7.680 - 7.733: 99.8461% ( 2) 00:13:07.909 7.733 - 7.787: 99.8512% ( 1) 00:13:07.909 7.840 - 7.893: 99.8563% ( 1) 00:13:07.909 7.893 - 7.947: 99.8614% ( 1) 00:13:07.909 7.947 - 8.000: 99.8717% ( 2) 00:13:07.909 8.000 - 8.053: 99.8768% ( 1) 00:13:07.909 8.160 - 8.213: 99.8820% ( 1) 00:13:07.909 8.373 - 8.427: 99.8871% ( 1) 00:13:07.909 8.533 - 8.587: 99.8922% ( 1) 00:13:07.909 8.587 - 8.640: 99.8974% ( 1) 00:13:07.909 8.960 - 9.013: 99.9025% ( 1) 00:13:07.909 10.240 - 10.293: 99.9076% ( 1) 00:13:07.909 3986.773 - 4014.080: 100.0000% ( 18) 00:13:07.909 00:13:07.909 Complete histogram 00:13:07.909 ================== 00:13:07.909 Range in us Cumulative Count 00:13:07.909 2.373 - 2.387: 0.0051% ( 1) 00:13:07.909 2.387 - 2.400: 0.7287% ( 141) 00:13:07.909 2.400 - 2.413: 1.0469% ( 62) 00:13:07.909 2.413 - [2024-07-15 12:58:29.688966] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:07.909 2.427: 1.1444% ( 19) 00:13:07.909 2.427 - 2.440: 1.2521% ( 21) 00:13:07.909 2.440 - 2.453: 48.6837% ( 9243) 00:13:07.910 2.453 - 2.467: 59.2805% ( 2065) 00:13:07.910 2.467 - 2.480: 72.6690% ( 2609) 00:13:07.910 2.480 - 2.493: 79.8019% ( 1390) 00:13:07.910 2.493 - 2.507: 82.1317% ( 454) 00:13:07.910 2.507 - 2.520: 85.4724% ( 651) 00:13:07.910 2.520 - 2.533: 91.2762% ( 1131) 00:13:07.910 2.533 - 2.547: 95.2994% ( 784) 00:13:07.910 2.547 - 2.560: 97.4855% ( 426) 00:13:07.910 2.560 - 2.573: 98.8146% ( 259) 00:13:07.910 2.573 - 2.587: 99.3021% ( 95) 00:13:07.910 2.587 - 2.600: 99.3842% ( 16) 00:13:07.910 2.600 - 2.613: 99.3945% ( 2) 00:13:07.910 4.187 - 4.213: 99.3996% ( 1) 00:13:07.910 4.240 - 4.267: 99.4047% ( 1) 00:13:07.910 4.267 - 4.293: 99.4099% ( 1) 00:13:07.910 4.293 - 4.320: 99.4150% ( 1) 00:13:07.910 4.320 - 4.347: 99.4201% ( 1) 00:13:07.910 4.373 - 4.400: 99.4253% ( 1) 00:13:07.910 4.427 - 4.453: 99.4355% ( 2) 00:13:07.910 4.453 - 4.480: 99.4407% ( 1) 00:13:07.910 4.507 - 4.533: 99.4458% ( 1) 00:13:07.910 4.560 - 4.587: 99.4509% ( 1) 00:13:07.910 4.613 - 4.640: 99.4560% ( 1) 00:13:07.910 4.640 - 4.667: 99.4612% ( 1) 00:13:07.910 5.120 - 5.147: 99.4663% ( 1) 00:13:07.910 5.227 - 5.253: 99.4714% ( 1) 00:13:07.910 5.253 - 5.280: 99.4766% ( 1) 00:13:07.910 5.307 - 5.333: 99.4868% ( 2) 00:13:07.910 5.333 - 5.360: 99.4920% ( 1) 00:13:07.910 5.360 - 5.387: 99.4971% ( 1) 00:13:07.910 5.387 - 5.413: 99.5022% ( 1) 00:13:07.910 5.413 - 5.440: 99.5074% ( 1) 00:13:07.910 5.440 - 5.467: 99.5125% ( 1) 00:13:07.910 5.467 - 5.493: 99.5176% ( 1) 00:13:07.910 5.520 - 5.547: 99.5228% ( 1) 00:13:07.910 5.547 - 5.573: 99.5279% ( 1) 00:13:07.910 5.600 - 5.627: 99.5330% ( 1) 00:13:07.910 5.627 - 5.653: 99.5382% ( 1) 00:13:07.910 5.653 - 5.680: 99.5433% ( 1) 00:13:07.910 5.920 - 5.947: 99.5484% ( 1) 00:13:07.910 6.027 - 6.053: 99.5535% ( 1) 00:13:07.910 6.053 - 6.080: 99.5638% ( 2) 00:13:07.910 6.133 - 6.160: 99.5741% ( 2) 00:13:07.910 6.160 - 6.187: 99.5792% ( 1) 00:13:07.910 6.187 - 6.213: 99.5843% ( 1) 00:13:07.910 6.213 - 6.240: 99.5895% ( 1) 00:13:07.910 6.240 - 6.267: 99.5946% ( 1) 00:13:07.910 6.347 - 6.373: 99.5997% ( 1) 00:13:07.910 6.693 - 6.720: 99.6049% ( 1) 00:13:07.910 7.947 - 8.000: 99.6100% ( 1) 00:13:07.910 11.360 - 11.413: 99.6151% ( 1) 00:13:07.910 11.413 - 11.467: 99.6203% ( 1) 00:13:07.910 11.893 - 11.947: 99.6254% ( 1) 00:13:07.910 1003.520 - 1010.347: 99.6305% ( 1) 
00:13:07.910 1037.653 - 1044.480: 99.6357% ( 1) 00:13:07.910 1843.200 - 1856.853: 99.6408% ( 1) 00:13:07.910 1993.387 - 2007.040: 99.6459% ( 1) 00:13:07.910 2075.307 - 2088.960: 99.6510% ( 1) 00:13:07.910 3986.773 - 4014.080: 99.9795% ( 64) 00:13:07.910 5980.160 - 6007.467: 99.9897% ( 2) 00:13:07.910 6963.200 - 6990.507: 100.0000% ( 2) 00:13:07.910 00:13:07.910 12:58:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:07.910 12:58:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:07.910 12:58:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:07.910 12:58:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:07.910 12:58:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:08.172 [ 00:13:08.172 { 00:13:08.172 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:08.172 "subtype": "Discovery", 00:13:08.172 "listen_addresses": [], 00:13:08.172 "allow_any_host": true, 00:13:08.172 "hosts": [] 00:13:08.172 }, 00:13:08.172 { 00:13:08.172 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:08.172 "subtype": "NVMe", 00:13:08.172 "listen_addresses": [ 00:13:08.172 { 00:13:08.172 "trtype": "VFIOUSER", 00:13:08.172 "adrfam": "IPv4", 00:13:08.172 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:08.172 "trsvcid": "0" 00:13:08.172 } 00:13:08.172 ], 00:13:08.172 "allow_any_host": true, 00:13:08.172 "hosts": [], 00:13:08.172 "serial_number": "SPDK1", 00:13:08.172 "model_number": "SPDK bdev Controller", 00:13:08.172 "max_namespaces": 32, 00:13:08.172 "min_cntlid": 1, 00:13:08.172 "max_cntlid": 65519, 00:13:08.172 "namespaces": [ 00:13:08.172 { 00:13:08.172 "nsid": 1, 00:13:08.172 "bdev_name": "Malloc1", 00:13:08.172 "name": "Malloc1", 00:13:08.172 "nguid": "F19A04F71CDC4DDD9A57479A95D8674F", 00:13:08.172 "uuid": "f19a04f7-1cdc-4ddd-9a57-479a95d8674f" 00:13:08.172 }, 00:13:08.172 { 00:13:08.172 "nsid": 2, 00:13:08.172 "bdev_name": "Malloc3", 00:13:08.172 "name": "Malloc3", 00:13:08.172 "nguid": "AD8019F22F1547508A93EB7ECC5F1544", 00:13:08.172 "uuid": "ad8019f2-2f15-4750-8a93-eb7ecc5f1544" 00:13:08.172 } 00:13:08.172 ] 00:13:08.172 }, 00:13:08.172 { 00:13:08.172 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:08.172 "subtype": "NVMe", 00:13:08.172 "listen_addresses": [ 00:13:08.172 { 00:13:08.172 "trtype": "VFIOUSER", 00:13:08.172 "adrfam": "IPv4", 00:13:08.172 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:08.172 "trsvcid": "0" 00:13:08.172 } 00:13:08.172 ], 00:13:08.172 "allow_any_host": true, 00:13:08.172 "hosts": [], 00:13:08.172 "serial_number": "SPDK2", 00:13:08.172 "model_number": "SPDK bdev Controller", 00:13:08.172 "max_namespaces": 32, 00:13:08.172 "min_cntlid": 1, 00:13:08.172 "max_cntlid": 65519, 00:13:08.172 "namespaces": [ 00:13:08.172 { 00:13:08.172 "nsid": 1, 00:13:08.172 "bdev_name": "Malloc2", 00:13:08.172 "name": "Malloc2", 00:13:08.172 "nguid": "D72F6F66392C4438BAC31BFA5F22F929", 00:13:08.172 "uuid": "d72f6f66-392c-4438-bac3-1bfa5f22f929" 00:13:08.172 } 00:13:08.172 ] 00:13:08.172 } 00:13:08.172 ] 00:13:08.173 12:58:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:08.173 12:58:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=596471 00:13:08.173 
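The nvmf_get_subsystems dump above is plain JSON on stdout, so the namespace-to-bdev wiring can be spot-checked straight from the shell. A small sketch, assuming jq is available on the test host:

# list NSID, bdev name and UUID for every namespace attached to cnode2
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
  | jq -r '.[] | select(.nqn == "nqn.2019-07.io.spdk:cnode2") | .namespaces[] | "\(.nsid) \(.bdev_name) \(.uuid)"'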
12:58:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:08.173 12:58:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:08.173 12:58:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:08.173 12:58:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:08.173 12:58:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:08.173 12:58:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:08.173 12:58:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:08.173 12:58:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:08.173 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.434 Malloc4 00:13:08.434 [2024-07-15 12:58:30.080149] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:08.434 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:08.434 [2024-07-15 12:58:30.250290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:08.696 Asynchronous Event Request test 00:13:08.696 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:08.696 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:08.696 Registering asynchronous event callbacks... 00:13:08.696 Starting namespace attribute notice tests for all controllers... 00:13:08.696 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:08.696 aer_cb - Changed Namespace 00:13:08.696 Cleaning up... 
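The "aer_cb - Changed Namespace" callback above is provoked purely from the RPC side while the aer binary waits on the touch file: a fresh malloc bdev is attached to cnode2 as NSID 2, which makes the target raise a namespace-attribute-changed notice (log page 4, event type 0x02). Condensed, the sequence the script runs is roughly:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_malloc_create 64 512 --name Malloc4                        # 64 MB bdev, 512-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # attach it as NSID 2
$RPC nvmf_get_subsystems                                             # re-dump to confirm the new namespace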
00:13:08.696 [ 00:13:08.696 { 00:13:08.696 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:08.696 "subtype": "Discovery", 00:13:08.696 "listen_addresses": [], 00:13:08.696 "allow_any_host": true, 00:13:08.696 "hosts": [] 00:13:08.696 }, 00:13:08.696 { 00:13:08.696 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:08.696 "subtype": "NVMe", 00:13:08.696 "listen_addresses": [ 00:13:08.696 { 00:13:08.696 "trtype": "VFIOUSER", 00:13:08.696 "adrfam": "IPv4", 00:13:08.696 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:08.696 "trsvcid": "0" 00:13:08.696 } 00:13:08.696 ], 00:13:08.696 "allow_any_host": true, 00:13:08.696 "hosts": [], 00:13:08.696 "serial_number": "SPDK1", 00:13:08.696 "model_number": "SPDK bdev Controller", 00:13:08.696 "max_namespaces": 32, 00:13:08.696 "min_cntlid": 1, 00:13:08.696 "max_cntlid": 65519, 00:13:08.696 "namespaces": [ 00:13:08.696 { 00:13:08.696 "nsid": 1, 00:13:08.696 "bdev_name": "Malloc1", 00:13:08.696 "name": "Malloc1", 00:13:08.696 "nguid": "F19A04F71CDC4DDD9A57479A95D8674F", 00:13:08.696 "uuid": "f19a04f7-1cdc-4ddd-9a57-479a95d8674f" 00:13:08.696 }, 00:13:08.696 { 00:13:08.696 "nsid": 2, 00:13:08.696 "bdev_name": "Malloc3", 00:13:08.696 "name": "Malloc3", 00:13:08.696 "nguid": "AD8019F22F1547508A93EB7ECC5F1544", 00:13:08.696 "uuid": "ad8019f2-2f15-4750-8a93-eb7ecc5f1544" 00:13:08.696 } 00:13:08.696 ] 00:13:08.696 }, 00:13:08.696 { 00:13:08.696 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:08.696 "subtype": "NVMe", 00:13:08.696 "listen_addresses": [ 00:13:08.696 { 00:13:08.696 "trtype": "VFIOUSER", 00:13:08.696 "adrfam": "IPv4", 00:13:08.696 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:08.696 "trsvcid": "0" 00:13:08.696 } 00:13:08.696 ], 00:13:08.696 "allow_any_host": true, 00:13:08.696 "hosts": [], 00:13:08.696 "serial_number": "SPDK2", 00:13:08.696 "model_number": "SPDK bdev Controller", 00:13:08.696 "max_namespaces": 32, 00:13:08.696 "min_cntlid": 1, 00:13:08.696 "max_cntlid": 65519, 00:13:08.696 "namespaces": [ 00:13:08.696 { 00:13:08.696 "nsid": 1, 00:13:08.696 "bdev_name": "Malloc2", 00:13:08.696 "name": "Malloc2", 00:13:08.696 "nguid": "D72F6F66392C4438BAC31BFA5F22F929", 00:13:08.696 "uuid": "d72f6f66-392c-4438-bac3-1bfa5f22f929" 00:13:08.696 }, 00:13:08.696 { 00:13:08.696 "nsid": 2, 00:13:08.696 "bdev_name": "Malloc4", 00:13:08.696 "name": "Malloc4", 00:13:08.696 "nguid": "9AF2CDB555DF4BB78989639C515E661B", 00:13:08.696 "uuid": "9af2cdb5-55df-4bb7-8989-639c515e661b" 00:13:08.696 } 00:13:08.696 ] 00:13:08.696 } 00:13:08.696 ] 00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 596471 00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 587383 00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 587383 ']' 00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 587383 00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 587383 00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 587383' 00:13:08.696 killing process with pid 587383 00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 587383 00:13:08.696 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 587383 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=596496 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 596496' 00:13:08.958 Process pid: 596496 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 596496 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 596496 ']' 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.958 12:58:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:08.958 [2024-07-15 12:58:30.733137] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:08.958 [2024-07-15 12:58:30.734466] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:13:08.958 [2024-07-15 12:58:30.734524] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.958 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.220 [2024-07-15 12:58:30.807841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.220 [2024-07-15 12:58:30.874707] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.220 [2024-07-15 12:58:30.874746] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:09.220 [2024-07-15 12:58:30.874753] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.220 [2024-07-15 12:58:30.874759] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.220 [2024-07-15 12:58:30.874765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.220 [2024-07-15 12:58:30.874830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.220 [2024-07-15 12:58:30.874947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.220 [2024-07-15 12:58:30.875106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.220 [2024-07-15 12:58:30.875106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.220 [2024-07-15 12:58:30.943711] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:09.220 [2024-07-15 12:58:30.943711] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:09.220 [2024-07-15 12:58:30.944816] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:09.220 [2024-07-15 12:58:30.945176] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:09.220 [2024-07-15 12:58:30.945272] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:09.793 12:58:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:09.793 12:58:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:09.793 12:58:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:10.736 12:58:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:10.998 12:58:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:10.998 12:58:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:10.998 12:58:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:10.998 12:58:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:10.998 12:58:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:11.259 Malloc1 00:13:11.259 12:58:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:11.259 12:58:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:11.521 12:58:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:11.521 12:58:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:13:11.521 12:58:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:11.521 12:58:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:11.781 Malloc2 00:13:11.781 12:58:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:12.041 12:58:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:12.041 12:58:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:12.301 12:58:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:12.301 12:58:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 596496 00:13:12.301 12:58:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 596496 ']' 00:13:12.301 12:58:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 596496 00:13:12.301 12:58:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:12.301 12:58:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.301 12:58:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 596496 00:13:12.301 12:58:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:12.301 12:58:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:12.301 12:58:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 596496' 00:13:12.302 killing process with pid 596496 00:13:12.302 12:58:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 596496 00:13:12.302 12:58:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 596496 00:13:12.563 12:58:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:12.563 12:58:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:12.563 00:13:12.563 real 0m50.494s 00:13:12.563 user 3m20.156s 00:13:12.563 sys 0m2.977s 00:13:12.563 12:58:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:12.563 12:58:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:12.563 ************************************ 00:13:12.563 END TEST nvmf_vfio_user 00:13:12.563 ************************************ 00:13:12.563 12:58:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:12.563 12:58:34 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:12.563 12:58:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:12.563 12:58:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.563 12:58:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:12.563 ************************************ 00:13:12.563 START TEST 
nvmf_vfio_user_nvme_compliance 00:13:12.563 ************************************ 00:13:12.563 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:12.563 * Looking for test storage... 00:13:12.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:12.563 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:12.563 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.824 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=597353 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 597353' 00:13:12.825 Process pid: 597353 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 597353 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 597353 ']' 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.825 12:58:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:12.825 [2024-07-15 12:58:34.476206] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:13:12.825 [2024-07-15 12:58:34.476285] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.825 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.825 [2024-07-15 12:58:34.548490] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:12.825 [2024-07-15 12:58:34.623089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.825 [2024-07-15 12:58:34.623129] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.825 [2024-07-15 12:58:34.623136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.825 [2024-07-15 12:58:34.623143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.825 [2024-07-15 12:58:34.623149] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
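As with the earlier target, the compliance run launches its own nvmf_tgt (here on a three-core mask) and only starts issuing rpc_cmd calls once the RPC socket answers; waitforlisten above is the helper that does that polling. A bare-bones equivalent, assuming the same build tree and the default /var/tmp/spdk.sock socket (the polling RPC shown is just one possible probe):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &   # cores 0-2, all tracepoint groups enabled
nvmfpid=$!
# poll the RPC socket until the target responds, then proceed with configuration
until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done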
00:13:12.825 [2024-07-15 12:58:34.623273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.825 [2024-07-15 12:58:34.623339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.825 [2024-07-15 12:58:34.623341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.765 12:58:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.765 12:58:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:13.765 12:58:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:14.704 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:14.704 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:14.704 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:14.704 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.704 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:14.704 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.704 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:14.704 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:14.704 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.704 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:14.704 malloc0 00:13:14.704 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.704 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:14.705 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.705 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:14.705 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.705 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:14.705 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.705 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:14.705 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.705 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:14.705 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.705 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:14.705 12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.705 
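Everything the compliance binary talks to is provisioned over RPC in the block above. The same sequence, written out as a minimal sketch against the default /var/tmp/spdk.sock (again using scripts/rpc.py rather than the rpc_cmd shell function; $RPC is a placeholder):

    RPC="$SPDK_ROOT/scripts/rpc.py"

    $RPC nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user

    # 64 MiB RAM-backed bdev with 512-byte blocks, matching MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE
    $RPC bdev_malloc_create 64 512 -b malloc0

    # -a allow any host, -s serial number, -m max namespaces
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0

    # For vfio-user the listener "address" is a directory, not an IP:port pair
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

With that in place, nvme_compliance attaches using 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0', which is exactly the transport ID the next trace line passes with -r.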
12:58:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:14.705 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.705 00:13:14.705 00:13:14.705 CUnit - A unit testing framework for C - Version 2.1-3 00:13:14.705 http://cunit.sourceforge.net/ 00:13:14.705 00:13:14.705 00:13:14.705 Suite: nvme_compliance 00:13:14.705 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 12:58:36.519669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:14.705 [2024-07-15 12:58:36.521012] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:14.705 [2024-07-15 12:58:36.521023] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:14.705 [2024-07-15 12:58:36.521027] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:14.705 [2024-07-15 12:58:36.522682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:14.964 passed 00:13:14.964 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 12:58:36.618283] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:14.964 [2024-07-15 12:58:36.621298] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:14.964 passed 00:13:14.964 Test: admin_identify_ns ...[2024-07-15 12:58:36.717509] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:14.964 [2024-07-15 12:58:36.778241] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:14.964 [2024-07-15 12:58:36.786241] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:15.224 [2024-07-15 12:58:36.807350] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:15.224 passed 00:13:15.224 Test: admin_get_features_mandatory_features ...[2024-07-15 12:58:36.900990] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:15.224 [2024-07-15 12:58:36.904005] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:15.224 passed 00:13:15.224 Test: admin_get_features_optional_features ...[2024-07-15 12:58:36.996559] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:15.224 [2024-07-15 12:58:36.999570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:15.224 passed 00:13:15.483 Test: admin_set_features_number_of_queues ...[2024-07-15 12:58:37.093772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:15.483 [2024-07-15 12:58:37.197652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:15.483 passed 00:13:15.483 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 12:58:37.291319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:15.483 [2024-07-15 12:58:37.294344] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:15.744 passed 00:13:15.744 Test: admin_get_log_page_with_lpo ...[2024-07-15 12:58:37.386446] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:15.744 [2024-07-15 12:58:37.455242] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:15.744 [2024-07-15 12:58:37.468286] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:15.744 passed 00:13:15.744 Test: fabric_property_get ...[2024-07-15 12:58:37.561916] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:15.744 [2024-07-15 12:58:37.563166] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:15.744 [2024-07-15 12:58:37.564939] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.005 passed 00:13:16.005 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 12:58:37.659463] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.006 [2024-07-15 12:58:37.660714] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:16.006 [2024-07-15 12:58:37.662486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.006 passed 00:13:16.006 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 12:58:37.750606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.267 [2024-07-15 12:58:37.833239] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:16.267 [2024-07-15 12:58:37.849239] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:16.267 [2024-07-15 12:58:37.857342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.267 passed 00:13:16.267 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 12:58:37.947965] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.267 [2024-07-15 12:58:37.949201] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:16.267 [2024-07-15 12:58:37.950982] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.267 passed 00:13:16.267 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 12:58:38.044114] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.528 [2024-07-15 12:58:38.120238] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:16.528 [2024-07-15 12:58:38.144242] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:16.528 [2024-07-15 12:58:38.149331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.528 passed 00:13:16.528 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 12:58:38.239942] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.528 [2024-07-15 12:58:38.241185] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:16.528 [2024-07-15 12:58:38.241205] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:16.528 [2024-07-15 12:58:38.242966] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.528 passed 00:13:16.528 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 12:58:38.336475] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.789 [2024-07-15 12:58:38.444249] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:16.789 [2024-07-15 12:58:38.452240] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:16.789 [2024-07-15 12:58:38.460237] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:16.789 [2024-07-15 12:58:38.468243] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:16.789 [2024-07-15 12:58:38.500338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.789 passed 00:13:16.789 Test: admin_create_io_sq_verify_pc ...[2024-07-15 12:58:38.590980] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.789 [2024-07-15 12:58:38.606248] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:17.050 [2024-07-15 12:58:38.624122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.050 passed 00:13:17.050 Test: admin_create_io_qp_max_qps ...[2024-07-15 12:58:38.717644] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.437 [2024-07-15 12:58:39.821243] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:18.437 [2024-07-15 12:58:40.217100] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.437 passed 00:13:18.697 Test: admin_create_io_sq_shared_cq ...[2024-07-15 12:58:40.310527] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:18.697 [2024-07-15 12:58:40.440237] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:18.697 [2024-07-15 12:58:40.477284] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:18.697 passed 00:13:18.697 00:13:18.697 Run Summary: Type Total Ran Passed Failed Inactive 00:13:18.697 suites 1 1 n/a 0 0 00:13:18.697 tests 18 18 18 0 0 00:13:18.697 asserts 360 360 360 0 n/a 00:13:18.697 00:13:18.697 Elapsed time = 1.662 seconds 00:13:18.958 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 597353 00:13:18.958 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 597353 ']' 00:13:18.958 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 597353 00:13:18.959 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:18.959 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:18.959 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 597353 00:13:18.959 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:18.959 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:18.959 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 597353' 00:13:18.959 killing process with pid 597353 00:13:18.959 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 597353 00:13:18.959 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 597353 00:13:18.959 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:18.959 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:18.959 00:13:18.959 real 0m6.444s 00:13:18.959 user 0m18.395s 00:13:18.959 sys 0m0.489s 00:13:18.959 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.959 12:58:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:18.959 ************************************ 00:13:18.959 END TEST nvmf_vfio_user_nvme_compliance 00:13:18.959 ************************************ 00:13:18.959 12:58:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:18.959 12:58:40 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:18.959 12:58:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:18.959 12:58:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.959 12:58:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.220 ************************************ 00:13:19.220 START TEST nvmf_vfio_user_fuzz 00:13:19.220 ************************************ 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:19.220 * Looking for test storage... 00:13:19.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.220 12:58:40 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.220 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=598638 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 598638' 00:13:19.221 Process pid: 598638 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 598638 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 598638 ']' 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
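The fuzz stage reuses the same vfio-user provisioning pattern, but the target is pinned to a single core (-m 0x1 above) and the client side is the nvme_fuzz app rather than nvme_compliance. A minimal sketch of the client invocation that the following trace records, with the flag readings added as comments (the readings are inferred, not printed by the log):

    # -m 0x2      run the fuzzer on core 1, leaving core 0 to the target
    # -t 30       fuzz for 30 seconds
    # -S 123456   fixed seed, so the random_seed values in the summary are reproducible
    # -F <trid>   transport ID of the subsystem created for this test
    "$SPDK_ROOT"/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

The -N and -a flags are passed through exactly as the harness does and are not annotated here. The per-queue command counts and random_seed values in the "Fuzzing completed" summary below come straight from this run.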
00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.221 12:58:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:20.163 12:58:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.163 12:58:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:20.164 12:58:41 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:21.107 malloc0 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:21.107 12:58:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:53.304 Fuzzing completed. 
Shutting down the fuzz application 00:13:53.304 00:13:53.304 Dumping successful admin opcodes: 00:13:53.304 8, 9, 10, 24, 00:13:53.304 Dumping successful io opcodes: 00:13:53.304 0, 00:13:53.304 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1146303, total successful commands: 4519, random_seed: 242650112 00:13:53.304 NS: 0x200003a1ef00 admin qp, Total commands completed: 144328, total successful commands: 1173, random_seed: 267767680 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 598638 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 598638 ']' 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 598638 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 598638 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 598638' 00:13:53.304 killing process with pid 598638 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 598638 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 598638 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:53.304 00:13:53.304 real 0m33.686s 00:13:53.304 user 0m37.780s 00:13:53.304 sys 0m26.331s 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.304 12:59:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:53.304 ************************************ 00:13:53.304 END TEST nvmf_vfio_user_fuzz 00:13:53.304 ************************************ 00:13:53.304 12:59:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:53.304 12:59:14 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:53.304 12:59:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:53.304 12:59:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.304 12:59:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:53.304 ************************************ 00:13:53.304 START 
TEST nvmf_host_management 00:13:53.304 ************************************ 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:53.304 * Looking for test storage... 00:13:53.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.304 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.305 12:59:14 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:53.305 12:59:14 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:53.305 12:59:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:01.442 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:01.442 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:01.442 Found net devices under 0000:31:00.0: cvl_0_0 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:01.442 Found net devices under 0000:31:00.1: cvl_0_1 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:01.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:14:01.442 00:14:01.442 --- 10.0.0.2 ping statistics --- 00:14:01.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.442 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:01.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:01.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:14:01.442 00:14:01.442 --- 10.0.0.1 ping statistics --- 00:14:01.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.442 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=609435 00:14:01.442 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 609435 00:14:01.443 12:59:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:01.443 12:59:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 609435 ']' 00:14:01.443 12:59:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.443 12:59:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.443 12:59:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:01.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.443 12:59:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.443 12:59:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:01.443 [2024-07-15 12:59:22.974262] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:14:01.443 [2024-07-15 12:59:22.974324] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.443 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.443 [2024-07-15 12:59:23.076490] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.443 [2024-07-15 12:59:23.173964] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.443 [2024-07-15 12:59:23.174031] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.443 [2024-07-15 12:59:23.174040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.443 [2024-07-15 12:59:23.174047] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.443 [2024-07-15 12:59:23.174054] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.443 [2024-07-15 12:59:23.174208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.443 [2024-07-15 12:59:23.174365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.443 [2024-07-15 12:59:23.174680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:01.443 [2024-07-15 12:59:23.174683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:02.013 [2024-07-15 12:59:23.801740] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:02.013 12:59:23 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.013 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:02.273 Malloc0 00:14:02.273 [2024-07-15 12:59:23.861067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=609674 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 609674 /var/tmp/bdevperf.sock 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 609674 ']' 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:02.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
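The 10.0.0.2:4420 listener just reported by nvmf_tcp_listen sits on the cvl_0_0 port that nvmf_tcp_init moved into the cvl_0_0_ns_spdk namespace earlier in this test, which is why this target is launched under "ip netns exec cvl_0_0_ns_spdk". Reproducing that plumbing by hand looks roughly like the sketch below; the cvl_0_0/cvl_0_1 interface names are the ones this host reported and will differ elsewhere:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port goes into the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # accept TCP port 4420 arriving on the initiator-side interface (rule taken verbatim from the trace)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                         # same reachability check the log's ping block shows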
00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:02.273 { 00:14:02.273 "params": { 00:14:02.273 "name": "Nvme$subsystem", 00:14:02.273 "trtype": "$TEST_TRANSPORT", 00:14:02.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:02.273 "adrfam": "ipv4", 00:14:02.273 "trsvcid": "$NVMF_PORT", 00:14:02.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:02.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:02.273 "hdgst": ${hdgst:-false}, 00:14:02.273 "ddgst": ${ddgst:-false} 00:14:02.273 }, 00:14:02.273 "method": "bdev_nvme_attach_controller" 00:14:02.273 } 00:14:02.273 EOF 00:14:02.273 )") 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:02.273 12:59:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:02.273 "params": { 00:14:02.273 "name": "Nvme0", 00:14:02.273 "trtype": "tcp", 00:14:02.273 "traddr": "10.0.0.2", 00:14:02.273 "adrfam": "ipv4", 00:14:02.273 "trsvcid": "4420", 00:14:02.273 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:02.273 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:02.273 "hdgst": false, 00:14:02.273 "ddgst": false 00:14:02.273 }, 00:14:02.273 "method": "bdev_nvme_attach_controller" 00:14:02.273 }' 00:14:02.273 [2024-07-15 12:59:23.960155] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:14:02.273 [2024-07-15 12:59:23.960203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid609674 ] 00:14:02.273 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.273 [2024-07-15 12:59:24.026571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.273 [2024-07-15 12:59:24.091653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.534 Running I/O for 10 seconds... 
00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=712 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 712 -ge 100 ']' 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.106 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.106 [2024-07-15 12:59:24.821071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be 
set 00:14:03.106 [2024-07-15 12:59:24.821166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.106 [2024-07-15 12:59:24.821446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821459] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.821555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46e20 is same with the state(5) to be set 00:14:03.107 [2024-07-15 12:59:24.822069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.107 [2024-07-15 12:59:24.822507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.107 [2024-07-15 12:59:24.822514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.822992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.822999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.823009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.823016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.823026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.823033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.108 [2024-07-15 12:59:24.823042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.108 [2024-07-15 12:59:24.823049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.109 [2024-07-15 12:59:24.823059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.109 [2024-07-15 12:59:24.823066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.109 [2024-07-15 12:59:24.823075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.109 [2024-07-15 12:59:24.823082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.109 [2024-07-15 12:59:24.823091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.109 [2024-07-15 12:59:24.823098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.109 [2024-07-15 12:59:24.823107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.109 [2024-07-15 12:59:24.823114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.109 [2024-07-15 12:59:24.823124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.109 [2024-07-15 12:59:24.823131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.109 [2024-07-15 12:59:24.823140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.109 [2024-07-15 12:59:24.823151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.109 [2024-07-15 12:59:24.823162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.109 [2024-07-15 12:59:24.823169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.109 [2024-07-15 12:59:24.823178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:03.109 [2024-07-15 12:59:24.823185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.109 [2024-07-15 12:59:24.823193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1918850 is same with the state(5) to be set 00:14:03.109 [2024-07-15 12:59:24.823240] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1918850 was disconnected and freed. reset controller. 00:14:03.109 [2024-07-15 12:59:24.824463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:03.109 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.109 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:03.109 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.109 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:03.109 task offset: 98944 on job bdev=Nvme0n1 fails 00:14:03.109 00:14:03.109 Latency(us) 00:14:03.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.109 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:03.109 Job: Nvme0n1 ended in about 0.55 seconds with error 00:14:03.109 Verification LBA range: start 0x0 length 0x400 00:14:03.109 Nvme0n1 : 0.55 1414.65 88.42 117.12 0.00 40773.65 4478.29 34078.72 00:14:03.109 =================================================================================================================== 00:14:03.109 Total : 1414.65 88.42 117.12 0.00 40773.65 4478.29 34078.72 00:14:03.109 [2024-07-15 12:59:24.826715] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:03.109 [2024-07-15 12:59:24.826739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1507540 (9): Bad file descriptor 00:14:03.109 12:59:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.109 12:59:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:03.109 [2024-07-15 12:59:24.838423] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:04.052 12:59:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 609674 00:14:04.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (609674) - No such process 00:14:04.053 12:59:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:04.053 12:59:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:04.053 12:59:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:04.053 12:59:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:04.053 12:59:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:04.053 12:59:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:04.053 12:59:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:04.053 12:59:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:04.053 { 00:14:04.053 "params": { 00:14:04.053 "name": "Nvme$subsystem", 00:14:04.053 "trtype": "$TEST_TRANSPORT", 00:14:04.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:04.053 "adrfam": "ipv4", 00:14:04.053 "trsvcid": "$NVMF_PORT", 00:14:04.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:04.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:04.053 "hdgst": ${hdgst:-false}, 00:14:04.053 "ddgst": ${ddgst:-false} 00:14:04.053 }, 00:14:04.053 "method": "bdev_nvme_attach_controller" 00:14:04.053 } 00:14:04.053 EOF 00:14:04.053 )") 00:14:04.053 12:59:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:04.053 12:59:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:04.053 12:59:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:04.053 12:59:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:04.053 "params": { 00:14:04.053 "name": "Nvme0", 00:14:04.053 "trtype": "tcp", 00:14:04.053 "traddr": "10.0.0.2", 00:14:04.053 "adrfam": "ipv4", 00:14:04.053 "trsvcid": "4420", 00:14:04.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:04.053 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:04.053 "hdgst": false, 00:14:04.053 "ddgst": false 00:14:04.053 }, 00:14:04.053 "method": "bdev_nvme_attach_controller" 00:14:04.053 }' 00:14:04.313 [2024-07-15 12:59:25.895268] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:14:04.313 [2024-07-15 12:59:25.895318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid610028 ] 00:14:04.313 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.313 [2024-07-15 12:59:25.960945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.313 [2024-07-15 12:59:26.025034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.574 Running I/O for 1 seconds... 
00:14:05.515 00:14:05.515 Latency(us) 00:14:05.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.515 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:05.515 Verification LBA range: start 0x0 length 0x400 00:14:05.515 Nvme0n1 : 1.01 1470.00 91.87 0.00 0.00 42708.22 1460.91 34297.17 00:14:05.515 =================================================================================================================== 00:14:05.515 Total : 1470.00 91.87 0.00 0.00 42708.22 1460.91 34297.17 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.777 rmmod nvme_tcp 00:14:05.777 rmmod nvme_fabrics 00:14:05.777 rmmod nvme_keyring 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 609435 ']' 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 609435 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 609435 ']' 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 609435 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 609435 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 609435' 00:14:05.777 killing process with pid 609435 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 609435 00:14:05.777 12:59:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 609435 00:14:05.777 [2024-07-15 12:59:27.589637] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:06.038 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:06.038 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:06.038 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:06.038 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.038 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:06.038 12:59:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.038 12:59:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.038 12:59:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.951 12:59:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:07.951 12:59:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:07.951 00:14:07.951 real 0m15.123s 00:14:07.951 user 0m22.464s 00:14:07.951 sys 0m7.108s 00:14:07.951 12:59:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.951 12:59:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:07.951 ************************************ 00:14:07.952 END TEST nvmf_host_management 00:14:07.952 ************************************ 00:14:07.952 12:59:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:07.952 12:59:29 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:07.952 12:59:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:07.952 12:59:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.952 12:59:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.952 ************************************ 00:14:07.952 START TEST nvmf_lvol 00:14:07.952 ************************************ 00:14:07.952 12:59:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:08.213 * Looking for test storage... 
00:14:08.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.213 12:59:29 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:08.213 12:59:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:16.360 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.360 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:16.361 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:16.361 Found net devices under 0000:31:00.0: cvl_0_0 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:16.361 Found net devices under 0000:31:00.1: cvl_0_1 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:16.361 
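For orientation: gather_supported_nvmf_pci_devs walks the PCI bus for the NIC families the tests support (Intel E810/X722 and several Mellanox parts) and records the net device behind each match; on this host both E810 ports (0000:31:00.0/1, device 0x159b) resolve to cvl_0_0 and cvl_0_1. A simplified stand-alone equivalent of that sysfs walk, not the helper itself:

# list E810 (8086:159b) ports and the net devices behind them
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net_dev in "$pci/net/"*; do
        [[ -e $net_dev ]] || continue
        echo "Found ${pci##*/} -> ${net_dev##*/}"
    done
done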
12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:16.361 12:59:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:16.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:14:16.361 00:14:16.361 --- 10.0.0.2 ping statistics --- 00:14:16.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.361 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:14:16.361 00:14:16.361 --- 10.0.0.1 ping statistics --- 00:14:16.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.361 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=615049 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 615049 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 615049 ']' 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.361 12:59:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:16.361 [2024-07-15 12:59:38.167779] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:14:16.361 [2024-07-15 12:59:38.167888] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.622 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.622 [2024-07-15 12:59:38.254341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:16.622 [2024-07-15 12:59:38.327020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.622 [2024-07-15 12:59:38.327058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
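nvmftestinit places the target-side E810 port in a private network namespace and leaves its peer in the root namespace, so target and initiator traffic crosses the physical link on a single host; nvmf_tgt is then launched inside that namespace. A condensed sketch of what the trace above sets up, using the interface names, addresses and arguments from this run (SPDK_DIR stands for the Jenkins workspace checkout):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# move the target-side port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# allow NVMe/TCP (port 4420) in and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# start the target inside the namespace; this run uses cores 0-2 (-m 0x7)
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &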
00:14:16.622 [2024-07-15 12:59:38.327066] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.622 [2024-07-15 12:59:38.327073] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.622 [2024-07-15 12:59:38.327078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.622 [2024-07-15 12:59:38.327223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.622 [2024-07-15 12:59:38.327346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.622 [2024-07-15 12:59:38.327520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.194 12:59:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.194 12:59:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:17.194 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.194 12:59:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.194 12:59:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:17.195 12:59:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.195 12:59:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:17.455 [2024-07-15 12:59:39.127594] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.455 12:59:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:17.716 12:59:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:17.716 12:59:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:17.716 12:59:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:17.716 12:59:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:17.977 12:59:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:18.238 12:59:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0e46edcc-f8df-4117-a6b9-28dd14c8f4ab 00:14:18.238 12:59:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0e46edcc-f8df-4117-a6b9-28dd14c8f4ab lvol 20 00:14:18.238 12:59:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=831dcbea-bb93-4712-b92b-94ba4e17cf7a 00:14:18.238 12:59:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:18.499 12:59:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 831dcbea-bb93-4712-b92b-94ba4e17cf7a 00:14:18.761 12:59:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:18.761 [2024-07-15 12:59:40.517487] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.761 12:59:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:19.021 12:59:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=615738 00:14:19.021 12:59:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:19.021 12:59:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:19.021 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.965 12:59:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 831dcbea-bb93-4712-b92b-94ba4e17cf7a MY_SNAPSHOT 00:14:20.226 12:59:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0f031bcb-f1fa-4be9-9d8e-f3cd5814fc08 00:14:20.226 12:59:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 831dcbea-bb93-4712-b92b-94ba4e17cf7a 30 00:14:20.488 12:59:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0f031bcb-f1fa-4be9-9d8e-f3cd5814fc08 MY_CLONE 00:14:20.750 12:59:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=14bb378d-5b34-4ce4-9daa-5eac4e78b4ea 00:14:20.750 12:59:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 14bb378d-5b34-4ce4-9daa-5eac4e78b4ea 00:14:21.011 12:59:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 615738 00:14:31.044 Initializing NVMe Controllers 00:14:31.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:31.044 Controller IO queue size 128, less than required. 00:14:31.044 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:31.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:31.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:31.044 Initialization complete. Launching workers. 
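All of the configuration above goes through scripts/rpc.py against the target's default /var/tmp/spdk.sock. Collapsed into one place, the calls traced for this test are roughly the following; the UUID comments are the values echoed in this run, and rpc is just a shorthand variable for the full rpc.py path:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# transport plus backing store: two 64 MiB malloc bdevs striped into raid0
$rpc nvmf_create_transport -t tcp -o -u 8192
base_bdevs="$($rpc bdev_malloc_create 64 512) "      # Malloc0
base_bdevs+=$($rpc bdev_malloc_create 64 512)        # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$base_bdevs"

# lvstore and an initial-size (20) lvol on top of the raid
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # 0e46edcc-... in this run
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 831dcbea-...

# export the lvol over NVMe/TCP at 10.0.0.2:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# while spdk_nvme_perf runs against the namespace, exercise the snapshot path
snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # 0f031bcb-...
$rpc bdev_lvol_resize "$lvol" 30                          # initial size 20 -> final size 30
clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)        # 14bb378d-...
$rpc bdev_lvol_inflate "$clone"                           # detach the clone from its snapshot

The perf job (4 KiB random writes, queue depth 128, cores 3-4 via -c 0x18) keeps I/O in flight across the snapshot, resize, clone and inflate calls, which is the point of the test; the queue-depth warning and the per-core latency table above are its output.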
00:14:31.044 ======================================================== 00:14:31.044 Latency(us) 00:14:31.044 Device Information : IOPS MiB/s Average min max 00:14:31.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 18020.40 70.39 7104.21 1187.89 54740.06 00:14:31.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12320.40 48.13 10393.97 2757.62 39680.27 00:14:31.044 ======================================================== 00:14:31.044 Total : 30340.79 118.52 8440.07 1187.89 54740.06 00:14:31.044 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 831dcbea-bb93-4712-b92b-94ba4e17cf7a 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0e46edcc-f8df-4117-a6b9-28dd14c8f4ab 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:31.044 rmmod nvme_tcp 00:14:31.044 rmmod nvme_fabrics 00:14:31.044 rmmod nvme_keyring 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 615049 ']' 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 615049 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 615049 ']' 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 615049 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 615049 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 615049' 00:14:31.044 killing process with pid 615049 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 615049 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 615049 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.044 12:59:51 
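Teardown mirrors setup in reverse before nvmftestfini unloads the host modules and stops the target; continuing the sketch above with the same shorthand variables, that is roughly:

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"
modprobe -v -r nvme-tcp        # source of the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                # 615049 in this run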
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.044 12:59:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.487 12:59:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:32.487 00:14:32.487 real 0m24.138s 00:14:32.487 user 1m3.949s 00:14:32.487 sys 0m8.460s 00:14:32.487 12:59:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.487 12:59:53 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:32.487 ************************************ 00:14:32.487 END TEST nvmf_lvol 00:14:32.487 ************************************ 00:14:32.487 12:59:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:32.487 12:59:53 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:32.487 12:59:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:32.487 12:59:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.487 12:59:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:32.487 ************************************ 00:14:32.487 START TEST nvmf_lvs_grow 00:14:32.487 ************************************ 00:14:32.487 12:59:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:32.487 * Looking for test storage... 
00:14:32.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.487 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:32.488 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:32.488 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:32.488 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.488 12:59:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.488 12:59:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.488 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:32.488 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:32.488 12:59:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:32.488 12:59:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:40.627 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:40.627 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:40.627 Found net devices under 0000:31:00.0: cvl_0_0 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:40.627 Found net devices under 0000:31:00.1: cvl_0_1 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:40.627 13:00:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:40.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:14:40.627 00:14:40.627 --- 10.0.0.2 ping statistics --- 00:14:40.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.627 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:40.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:40.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:14:40.627 00:14:40.627 --- 10.0.0.1 ping statistics --- 00:14:40.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.627 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=622556 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 622556 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 622556 ']' 00:14:40.627 13:00:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.628 13:00:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.628 13:00:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.628 13:00:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.628 13:00:02 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:40.628 [2024-07-15 13:00:02.400452] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:14:40.628 [2024-07-15 13:00:02.400516] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.628 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.887 [2024-07-15 13:00:02.479407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.887 [2024-07-15 13:00:02.552383] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.887 [2024-07-15 13:00:02.552426] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:40.887 [2024-07-15 13:00:02.552437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.887 [2024-07-15 13:00:02.552445] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.887 [2024-07-15 13:00:02.552453] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.887 [2024-07-15 13:00:02.552480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.467 13:00:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.467 13:00:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:41.467 13:00:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.467 13:00:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.467 13:00:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:41.467 13:00:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.467 13:00:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:41.728 [2024-07-15 13:00:03.348148] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:41.728 ************************************ 00:14:41.728 START TEST lvs_grow_clean 00:14:41.728 ************************************ 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:41.728 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:41.989 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:41.989 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:41.989 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=410b8d1c-99e1-477d-9886-909b910b2bc4 00:14:41.989 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 410b8d1c-99e1-477d-9886-909b910b2bc4 00:14:41.989 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:42.251 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:42.251 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:42.251 13:00:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 410b8d1c-99e1-477d-9886-909b910b2bc4 lvol 150 00:14:42.512 13:00:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7d0af103-a3e5-43a0-b919-74399ffe368c 00:14:42.512 13:00:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:42.512 13:00:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:42.512 [2024-07-15 13:00:04.240326] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:42.512 [2024-07-15 13:00:04.240382] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:42.512 true 00:14:42.512 13:00:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 410b8d1c-99e1-477d-9886-909b910b2bc4 00:14:42.512 13:00:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:42.773 13:00:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:42.773 13:00:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:42.773 13:00:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7d0af103-a3e5-43a0-b919-74399ffe368c 00:14:43.032 13:00:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:43.293 [2024-07-15 13:00:04.866215] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.293 13:00:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.293 13:00:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:43.293 13:00:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=623258 00:14:43.293 13:00:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:43.293 13:00:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 623258 /var/tmp/bdevperf.sock 00:14:43.293 13:00:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 623258 ']' 00:14:43.293 13:00:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.293 13:00:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:43.293 13:00:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.293 13:00:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:43.293 13:00:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:43.293 [2024-07-15 13:00:05.086588] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
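lvs_grow_clean builds its lvstore on a file-backed AIO bdev precisely so the underlying device can be enlarged with truncate. Condensed, the sequence traced here (including the bdev_lvol_grow_lvstore call that appears further down) looks like the following; rpc and testdir are shorthand for the paths used in this run, and the cluster counts in the comments are the ones the trace reports:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target

truncate -s 200M "$testdir/aio_bdev"
$rpc bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 with 4 MiB clusters
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                           # exported as nqn...cnode0, namespace 1

# enlarge the backing file, then let the AIO bdev and the lvstore pick up the new size
truncate -s 400M "$testdir/aio_bdev"
$rpc bdev_aio_rescan aio_bdev                 # bdev grows 51200 -> 102400 blocks; clusters still 49
$rpc bdev_lvol_grow_lvstore -u "$lvs"
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99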
00:14:43.293 [2024-07-15 13:00:05.086644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623258 ] 00:14:43.293 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.553 [2024-07-15 13:00:05.169196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.553 [2024-07-15 13:00:05.233737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.126 13:00:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:44.126 13:00:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:44.126 13:00:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:44.387 Nvme0n1 00:14:44.387 13:00:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:44.646 [ 00:14:44.646 { 00:14:44.646 "name": "Nvme0n1", 00:14:44.646 "aliases": [ 00:14:44.646 "7d0af103-a3e5-43a0-b919-74399ffe368c" 00:14:44.646 ], 00:14:44.646 "product_name": "NVMe disk", 00:14:44.646 "block_size": 4096, 00:14:44.646 "num_blocks": 38912, 00:14:44.646 "uuid": "7d0af103-a3e5-43a0-b919-74399ffe368c", 00:14:44.646 "assigned_rate_limits": { 00:14:44.646 "rw_ios_per_sec": 0, 00:14:44.646 "rw_mbytes_per_sec": 0, 00:14:44.646 "r_mbytes_per_sec": 0, 00:14:44.646 "w_mbytes_per_sec": 0 00:14:44.646 }, 00:14:44.646 "claimed": false, 00:14:44.646 "zoned": false, 00:14:44.646 "supported_io_types": { 00:14:44.646 "read": true, 00:14:44.646 "write": true, 00:14:44.646 "unmap": true, 00:14:44.646 "flush": true, 00:14:44.646 "reset": true, 00:14:44.646 "nvme_admin": true, 00:14:44.646 "nvme_io": true, 00:14:44.646 "nvme_io_md": false, 00:14:44.646 "write_zeroes": true, 00:14:44.646 "zcopy": false, 00:14:44.646 "get_zone_info": false, 00:14:44.646 "zone_management": false, 00:14:44.646 "zone_append": false, 00:14:44.646 "compare": true, 00:14:44.646 "compare_and_write": true, 00:14:44.646 "abort": true, 00:14:44.646 "seek_hole": false, 00:14:44.646 "seek_data": false, 00:14:44.646 "copy": true, 00:14:44.646 "nvme_iov_md": false 00:14:44.646 }, 00:14:44.646 "memory_domains": [ 00:14:44.646 { 00:14:44.646 "dma_device_id": "system", 00:14:44.646 "dma_device_type": 1 00:14:44.646 } 00:14:44.646 ], 00:14:44.646 "driver_specific": { 00:14:44.646 "nvme": [ 00:14:44.646 { 00:14:44.646 "trid": { 00:14:44.646 "trtype": "TCP", 00:14:44.646 "adrfam": "IPv4", 00:14:44.646 "traddr": "10.0.0.2", 00:14:44.646 "trsvcid": "4420", 00:14:44.646 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:44.646 }, 00:14:44.646 "ctrlr_data": { 00:14:44.646 "cntlid": 1, 00:14:44.646 "vendor_id": "0x8086", 00:14:44.646 "model_number": "SPDK bdev Controller", 00:14:44.646 "serial_number": "SPDK0", 00:14:44.646 "firmware_revision": "24.09", 00:14:44.646 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:44.646 "oacs": { 00:14:44.646 "security": 0, 00:14:44.646 "format": 0, 00:14:44.646 "firmware": 0, 00:14:44.646 "ns_manage": 0 00:14:44.646 }, 00:14:44.646 "multi_ctrlr": true, 00:14:44.646 "ana_reporting": false 00:14:44.646 }, 
00:14:44.646 "vs": { 00:14:44.646 "nvme_version": "1.3" 00:14:44.646 }, 00:14:44.646 "ns_data": { 00:14:44.646 "id": 1, 00:14:44.646 "can_share": true 00:14:44.646 } 00:14:44.646 } 00:14:44.646 ], 00:14:44.646 "mp_policy": "active_passive" 00:14:44.646 } 00:14:44.646 } 00:14:44.646 ] 00:14:44.646 13:00:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=623433 00:14:44.646 13:00:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:44.646 13:00:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:44.646 Running I/O for 10 seconds... 00:14:46.028 Latency(us) 00:14:46.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.028 Nvme0n1 : 1.00 18166.00 70.96 0.00 0.00 0.00 0.00 0.00 00:14:46.028 =================================================================================================================== 00:14:46.028 Total : 18166.00 70.96 0.00 0.00 0.00 0.00 0.00 00:14:46.028 00:14:46.598 13:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 410b8d1c-99e1-477d-9886-909b910b2bc4 00:14:46.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.858 Nvme0n1 : 2.00 18232.00 71.22 0.00 0.00 0.00 0.00 0.00 00:14:46.858 =================================================================================================================== 00:14:46.858 Total : 18232.00 71.22 0.00 0.00 0.00 0.00 0.00 00:14:46.858 00:14:46.858 true 00:14:46.858 13:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 410b8d1c-99e1-477d-9886-909b910b2bc4 00:14:46.858 13:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:47.119 13:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:47.119 13:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:47.119 13:00:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 623433 00:14:47.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.689 Nvme0n1 : 3.00 18253.67 71.30 0.00 0.00 0.00 0.00 0.00 00:14:47.689 =================================================================================================================== 00:14:47.689 Total : 18253.67 71.30 0.00 0.00 0.00 0.00 0.00 00:14:47.689 00:14:48.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.630 Nvme0n1 : 4.00 18279.25 71.40 0.00 0.00 0.00 0.00 0.00 00:14:48.630 =================================================================================================================== 00:14:48.630 Total : 18279.25 71.40 0.00 0.00 0.00 0.00 0.00 00:14:48.630 00:14:50.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.012 Nvme0n1 : 5.00 18308.60 71.52 0.00 0.00 0.00 0.00 0.00 00:14:50.012 =================================================================================================================== 00:14:50.012 
Total : 18308.60 71.52 0.00 0.00 0.00 0.00 0.00 00:14:50.012 00:14:50.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.952 Nvme0n1 : 6.00 18328.67 71.60 0.00 0.00 0.00 0.00 0.00 00:14:50.952 =================================================================================================================== 00:14:50.952 Total : 18328.67 71.60 0.00 0.00 0.00 0.00 0.00 00:14:50.952 00:14:51.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.895 Nvme0n1 : 7.00 18343.43 71.65 0.00 0.00 0.00 0.00 0.00 00:14:51.895 =================================================================================================================== 00:14:51.895 Total : 18343.43 71.65 0.00 0.00 0.00 0.00 0.00 00:14:51.895 00:14:52.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.835 Nvme0n1 : 8.00 18361.62 71.73 0.00 0.00 0.00 0.00 0.00 00:14:52.835 =================================================================================================================== 00:14:52.835 Total : 18361.62 71.73 0.00 0.00 0.00 0.00 0.00 00:14:52.835 00:14:53.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.776 Nvme0n1 : 9.00 18369.22 71.75 0.00 0.00 0.00 0.00 0.00 00:14:53.776 =================================================================================================================== 00:14:53.776 Total : 18369.22 71.75 0.00 0.00 0.00 0.00 0.00 00:14:53.776 00:14:54.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.715 Nvme0n1 : 10.00 18375.40 71.78 0.00 0.00 0.00 0.00 0.00 00:14:54.715 =================================================================================================================== 00:14:54.715 Total : 18375.40 71.78 0.00 0.00 0.00 0.00 0.00 00:14:54.715 00:14:54.715 00:14:54.715 Latency(us) 00:14:54.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.715 Nvme0n1 : 10.00 18383.21 71.81 0.00 0.00 6960.38 3085.65 12615.68 00:14:54.715 =================================================================================================================== 00:14:54.715 Total : 18383.21 71.81 0.00 0.00 6960.38 3085.65 12615.68 00:14:54.715 0 00:14:54.715 13:00:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 623258 00:14:54.715 13:00:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 623258 ']' 00:14:54.715 13:00:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 623258 00:14:54.715 13:00:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:54.715 13:00:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.715 13:00:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 623258 00:14:54.715 13:00:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:54.715 13:00:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:54.715 13:00:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 623258' 00:14:54.715 killing process with pid 623258 00:14:54.715 13:00:16 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 623258 00:14:54.715 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.715 00:14:54.715 Latency(us) 00:14:54.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.715 =================================================================================================================== 00:14:54.715 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:54.715 13:00:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 623258 00:14:54.975 13:00:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:55.235 13:00:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:55.235 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 410b8d1c-99e1-477d-9886-909b910b2bc4 00:14:55.235 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:55.494 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:55.494 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:55.494 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:55.755 [2024-07-15 13:00:17.322413] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 410b8d1c-99e1-477d-9886-909b910b2bc4 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 410b8d1c-99e1-477d-9886-909b910b2bc4 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 410b8d1c-99e1-477d-9886-909b910b2bc4 00:14:55.755 request: 00:14:55.755 { 00:14:55.755 "uuid": "410b8d1c-99e1-477d-9886-909b910b2bc4", 00:14:55.755 "method": "bdev_lvol_get_lvstores", 00:14:55.755 "req_id": 1 00:14:55.755 } 00:14:55.755 Got JSON-RPC error response 00:14:55.755 response: 00:14:55.755 { 00:14:55.755 "code": -19, 00:14:55.755 "message": "No such device" 00:14:55.755 } 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:55.755 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:56.015 aio_bdev 00:14:56.015 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7d0af103-a3e5-43a0-b919-74399ffe368c 00:14:56.015 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=7d0af103-a3e5-43a0-b919-74399ffe368c 00:14:56.015 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:56.015 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:56.015 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:56.015 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:56.015 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:56.015 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7d0af103-a3e5-43a0-b919-74399ffe368c -t 2000 00:14:56.276 [ 00:14:56.276 { 00:14:56.276 "name": "7d0af103-a3e5-43a0-b919-74399ffe368c", 00:14:56.276 "aliases": [ 00:14:56.276 "lvs/lvol" 00:14:56.276 ], 00:14:56.276 "product_name": "Logical Volume", 00:14:56.276 "block_size": 4096, 00:14:56.276 "num_blocks": 38912, 00:14:56.276 "uuid": "7d0af103-a3e5-43a0-b919-74399ffe368c", 00:14:56.276 "assigned_rate_limits": { 00:14:56.276 "rw_ios_per_sec": 0, 00:14:56.276 "rw_mbytes_per_sec": 0, 00:14:56.276 "r_mbytes_per_sec": 0, 00:14:56.276 "w_mbytes_per_sec": 0 00:14:56.276 }, 00:14:56.276 "claimed": false, 00:14:56.276 "zoned": false, 00:14:56.276 "supported_io_types": { 00:14:56.276 "read": true, 00:14:56.276 "write": true, 00:14:56.276 "unmap": true, 00:14:56.276 "flush": false, 00:14:56.276 "reset": true, 00:14:56.276 "nvme_admin": false, 00:14:56.276 "nvme_io": false, 00:14:56.276 
"nvme_io_md": false, 00:14:56.276 "write_zeroes": true, 00:14:56.276 "zcopy": false, 00:14:56.276 "get_zone_info": false, 00:14:56.276 "zone_management": false, 00:14:56.276 "zone_append": false, 00:14:56.276 "compare": false, 00:14:56.276 "compare_and_write": false, 00:14:56.276 "abort": false, 00:14:56.276 "seek_hole": true, 00:14:56.276 "seek_data": true, 00:14:56.276 "copy": false, 00:14:56.276 "nvme_iov_md": false 00:14:56.276 }, 00:14:56.276 "driver_specific": { 00:14:56.276 "lvol": { 00:14:56.276 "lvol_store_uuid": "410b8d1c-99e1-477d-9886-909b910b2bc4", 00:14:56.276 "base_bdev": "aio_bdev", 00:14:56.276 "thin_provision": false, 00:14:56.276 "num_allocated_clusters": 38, 00:14:56.276 "snapshot": false, 00:14:56.276 "clone": false, 00:14:56.276 "esnap_clone": false 00:14:56.276 } 00:14:56.276 } 00:14:56.276 } 00:14:56.276 ] 00:14:56.276 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:56.276 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 410b8d1c-99e1-477d-9886-909b910b2bc4 00:14:56.276 13:00:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:56.536 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:56.536 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 410b8d1c-99e1-477d-9886-909b910b2bc4 00:14:56.536 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:56.536 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:56.536 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7d0af103-a3e5-43a0-b919-74399ffe368c 00:14:56.797 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 410b8d1c-99e1-477d-9886-909b910b2bc4 00:14:57.065 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:57.065 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:57.065 00:14:57.065 real 0m15.402s 00:14:57.065 user 0m15.117s 00:14:57.065 sys 0m1.272s 00:14:57.065 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.065 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:57.065 ************************************ 00:14:57.065 END TEST lvs_grow_clean 00:14:57.065 ************************************ 00:14:57.065 13:00:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:57.065 13:00:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:57.065 13:00:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:57.065 13:00:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:57.065 13:00:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:57.332 ************************************ 00:14:57.332 START TEST lvs_grow_dirty 00:14:57.332 ************************************ 00:14:57.332 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:57.332 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:57.332 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:57.332 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:57.332 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:57.332 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:57.332 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:57.332 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:57.332 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:57.332 13:00:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:57.332 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:57.332 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:57.592 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:14:57.592 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:14:57.592 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:57.592 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:57.592 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:57.592 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f lvol 150 00:14:57.851 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=aca7cc97-7694-4865-9461-94691d612a8e 00:14:57.851 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:57.851 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:58.111 
[2024-07-15 13:00:19.687238] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:58.111 [2024-07-15 13:00:19.687298] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:58.111 true 00:14:58.111 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:14:58.111 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:58.111 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:58.111 13:00:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:58.372 13:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aca7cc97-7694-4865-9461-94691d612a8e 00:14:58.372 13:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:58.632 [2024-07-15 13:00:20.297054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.632 13:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:58.632 13:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=626761 00:14:58.632 13:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.632 13:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:58.632 13:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 626761 /var/tmp/bdevperf.sock 00:14:58.632 13:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 626761 ']' 00:14:58.632 13:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.632 13:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.632 13:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
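The xtrace above drives the dirty-grow setup entirely through SPDK's JSON-RPC client. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt, rpc.py on PATH, and a hypothetical scratch file /tmp/aio_file (the 4 MiB cluster size, 150 MiB lvol and 10.0.0.2:4420 listener mirror this run):

  truncate -s 200M /tmp/aio_file
  rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M /tmp/aio_file        # grow the backing file on disk
  rpc.py bdev_aio_rescan aio_bdev       # aio bdev picks up the new block count (51200 -> 102400 above)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The lvstore itself is only enlarged later with bdev_lvol_grow_lvstore, while bdevperf keeps random writes in flight, which is what this test exercises.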
00:14:58.632 13:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.632 13:00:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:58.893 [2024-07-15 13:00:20.499719] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:14:58.893 [2024-07-15 13:00:20.499769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626761 ] 00:14:58.893 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.893 [2024-07-15 13:00:20.580573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.893 [2024-07-15 13:00:20.634753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.462 13:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.462 13:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:59.462 13:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:00.032 Nvme0n1 00:15:00.032 13:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:00.032 [ 00:15:00.032 { 00:15:00.032 "name": "Nvme0n1", 00:15:00.032 "aliases": [ 00:15:00.032 "aca7cc97-7694-4865-9461-94691d612a8e" 00:15:00.032 ], 00:15:00.032 "product_name": "NVMe disk", 00:15:00.032 "block_size": 4096, 00:15:00.032 "num_blocks": 38912, 00:15:00.032 "uuid": "aca7cc97-7694-4865-9461-94691d612a8e", 00:15:00.032 "assigned_rate_limits": { 00:15:00.032 "rw_ios_per_sec": 0, 00:15:00.032 "rw_mbytes_per_sec": 0, 00:15:00.032 "r_mbytes_per_sec": 0, 00:15:00.032 "w_mbytes_per_sec": 0 00:15:00.032 }, 00:15:00.032 "claimed": false, 00:15:00.032 "zoned": false, 00:15:00.032 "supported_io_types": { 00:15:00.032 "read": true, 00:15:00.032 "write": true, 00:15:00.032 "unmap": true, 00:15:00.032 "flush": true, 00:15:00.032 "reset": true, 00:15:00.032 "nvme_admin": true, 00:15:00.032 "nvme_io": true, 00:15:00.032 "nvme_io_md": false, 00:15:00.032 "write_zeroes": true, 00:15:00.032 "zcopy": false, 00:15:00.032 "get_zone_info": false, 00:15:00.032 "zone_management": false, 00:15:00.032 "zone_append": false, 00:15:00.032 "compare": true, 00:15:00.032 "compare_and_write": true, 00:15:00.032 "abort": true, 00:15:00.032 "seek_hole": false, 00:15:00.032 "seek_data": false, 00:15:00.032 "copy": true, 00:15:00.032 "nvme_iov_md": false 00:15:00.032 }, 00:15:00.032 "memory_domains": [ 00:15:00.032 { 00:15:00.032 "dma_device_id": "system", 00:15:00.032 "dma_device_type": 1 00:15:00.032 } 00:15:00.032 ], 00:15:00.032 "driver_specific": { 00:15:00.032 "nvme": [ 00:15:00.032 { 00:15:00.032 "trid": { 00:15:00.032 "trtype": "TCP", 00:15:00.032 "adrfam": "IPv4", 00:15:00.032 "traddr": "10.0.0.2", 00:15:00.032 "trsvcid": "4420", 00:15:00.032 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:00.032 }, 00:15:00.032 "ctrlr_data": { 00:15:00.032 "cntlid": 1, 00:15:00.032 "vendor_id": "0x8086", 00:15:00.032 "model_number": "SPDK bdev Controller", 00:15:00.033 "serial_number": "SPDK0", 
00:15:00.033 "firmware_revision": "24.09", 00:15:00.033 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:00.033 "oacs": { 00:15:00.033 "security": 0, 00:15:00.033 "format": 0, 00:15:00.033 "firmware": 0, 00:15:00.033 "ns_manage": 0 00:15:00.033 }, 00:15:00.033 "multi_ctrlr": true, 00:15:00.033 "ana_reporting": false 00:15:00.033 }, 00:15:00.033 "vs": { 00:15:00.033 "nvme_version": "1.3" 00:15:00.033 }, 00:15:00.033 "ns_data": { 00:15:00.033 "id": 1, 00:15:00.033 "can_share": true 00:15:00.033 } 00:15:00.033 } 00:15:00.033 ], 00:15:00.033 "mp_policy": "active_passive" 00:15:00.033 } 00:15:00.033 } 00:15:00.033 ] 00:15:00.033 13:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=626945 00:15:00.033 13:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:00.033 13:00:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.293 Running I/O for 10 seconds... 00:15:01.235 Latency(us) 00:15:01.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.235 Nvme0n1 : 1.00 17559.00 68.59 0.00 0.00 0.00 0.00 0.00 00:15:01.235 =================================================================================================================== 00:15:01.235 Total : 17559.00 68.59 0.00 0.00 0.00 0.00 0.00 00:15:01.235 00:15:02.179 13:00:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:15:02.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.179 Nvme0n1 : 2.00 17691.50 69.11 0.00 0.00 0.00 0.00 0.00 00:15:02.179 =================================================================================================================== 00:15:02.179 Total : 17691.50 69.11 0.00 0.00 0.00 0.00 0.00 00:15:02.179 00:15:02.179 true 00:15:02.179 13:00:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:15:02.179 13:00:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:02.446 13:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:02.446 13:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:02.446 13:00:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 626945 00:15:03.425 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.425 Nvme0n1 : 3.00 17743.67 69.31 0.00 0.00 0.00 0.00 0.00 00:15:03.425 =================================================================================================================== 00:15:03.425 Total : 17743.67 69.31 0.00 0.00 0.00 0.00 0.00 00:15:03.425 00:15:04.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.375 Nvme0n1 : 4.00 17779.75 69.45 0.00 0.00 0.00 0.00 0.00 00:15:04.375 =================================================================================================================== 00:15:04.375 Total : 17779.75 69.45 0.00 0.00 
0.00 0.00 0.00 00:15:04.375 00:15:05.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.324 Nvme0n1 : 5.00 17809.40 69.57 0.00 0.00 0.00 0.00 0.00 00:15:05.324 =================================================================================================================== 00:15:05.324 Total : 17809.40 69.57 0.00 0.00 0.00 0.00 0.00 00:15:05.324 00:15:06.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.273 Nvme0n1 : 6.00 17831.83 69.66 0.00 0.00 0.00 0.00 0.00 00:15:06.273 =================================================================================================================== 00:15:06.273 Total : 17831.83 69.66 0.00 0.00 0.00 0.00 0.00 00:15:06.273 00:15:07.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.224 Nvme0n1 : 7.00 17852.43 69.74 0.00 0.00 0.00 0.00 0.00 00:15:07.224 =================================================================================================================== 00:15:07.224 Total : 17852.43 69.74 0.00 0.00 0.00 0.00 0.00 00:15:07.224 00:15:08.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.173 Nvme0n1 : 8.00 17867.88 69.80 0.00 0.00 0.00 0.00 0.00 00:15:08.173 =================================================================================================================== 00:15:08.173 Total : 17867.88 69.80 0.00 0.00 0.00 0.00 0.00 00:15:08.173 00:15:09.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.116 Nvme0n1 : 9.00 17879.00 69.84 0.00 0.00 0.00 0.00 0.00 00:15:09.116 =================================================================================================================== 00:15:09.116 Total : 17879.00 69.84 0.00 0.00 0.00 0.00 0.00 00:15:09.116 00:15:10.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.500 Nvme0n1 : 10.00 17890.30 69.88 0.00 0.00 0.00 0.00 0.00 00:15:10.500 =================================================================================================================== 00:15:10.500 Total : 17890.30 69.88 0.00 0.00 0.00 0.00 0.00 00:15:10.500 00:15:10.500 00:15:10.500 Latency(us) 00:15:10.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.500 Nvme0n1 : 10.01 17890.47 69.88 0.00 0.00 7150.39 5843.63 17803.95 00:15:10.500 =================================================================================================================== 00:15:10.500 Total : 17890.47 69.88 0.00 0.00 7150.39 5843.63 17803.95 00:15:10.500 0 00:15:10.500 13:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 626761 00:15:10.500 13:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 626761 ']' 00:15:10.500 13:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 626761 00:15:10.500 13:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:10.500 13:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:10.500 13:00:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 626761 00:15:10.500 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:10.500 13:00:32 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:10.500 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 626761' 00:15:10.500 killing process with pid 626761 00:15:10.500 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 626761 00:15:10.500 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.500 00:15:10.500 Latency(us) 00:15:10.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.500 =================================================================================================================== 00:15:10.500 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.500 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 626761 00:15:10.500 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:10.500 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:10.764 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:15:10.764 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:11.026 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:11.026 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:11.026 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 622556 00:15:11.026 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 622556 00:15:11.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 622556 Killed "${NVMF_APP[@]}" "$@" 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=629156 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 629156 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 629156 ']' 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.027 13:00:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:11.027 [2024-07-15 13:00:32.774995] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:15:11.027 [2024-07-15 13:00:32.775074] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.027 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.027 [2024-07-15 13:00:32.849795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.288 [2024-07-15 13:00:32.913661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.288 [2024-07-15 13:00:32.913698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.288 [2024-07-15 13:00:32.913710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.288 [2024-07-15 13:00:32.913718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.288 [2024-07-15 13:00:32.913726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
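The restarted target above runs with -e 0xFFFF, so every tracepoint group is armed; its banner spells out the two ways to look at the data, and the shutdown trap in this run later archives the same buffer. A minimal sketch based only on those printed hints (running spdk_trace from the SPDK build tree is an assumption):

  # live snapshot of instance 0's nvmf tracepoints, exactly as the banner suggests
  spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory buffer for offline analysis, as the EXIT trap does further down
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0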
00:15:11.288 [2024-07-15 13:00:32.913754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.863 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.863 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:11.863 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:11.863 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.863 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:11.863 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.863 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:11.863 [2024-07-15 13:00:33.686701] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:11.863 [2024-07-15 13:00:33.686810] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:11.863 [2024-07-15 13:00:33.686848] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:12.123 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:12.123 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev aca7cc97-7694-4865-9461-94691d612a8e 00:15:12.123 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=aca7cc97-7694-4865-9461-94691d612a8e 00:15:12.123 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.123 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:12.123 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.123 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.123 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:12.123 13:00:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aca7cc97-7694-4865-9461-94691d612a8e -t 2000 00:15:12.383 [ 00:15:12.383 { 00:15:12.383 "name": "aca7cc97-7694-4865-9461-94691d612a8e", 00:15:12.383 "aliases": [ 00:15:12.383 "lvs/lvol" 00:15:12.383 ], 00:15:12.383 "product_name": "Logical Volume", 00:15:12.383 "block_size": 4096, 00:15:12.383 "num_blocks": 38912, 00:15:12.383 "uuid": "aca7cc97-7694-4865-9461-94691d612a8e", 00:15:12.383 "assigned_rate_limits": { 00:15:12.383 "rw_ios_per_sec": 0, 00:15:12.383 "rw_mbytes_per_sec": 0, 00:15:12.383 "r_mbytes_per_sec": 0, 00:15:12.383 "w_mbytes_per_sec": 0 00:15:12.383 }, 00:15:12.383 "claimed": false, 00:15:12.383 "zoned": false, 00:15:12.383 "supported_io_types": { 00:15:12.383 "read": true, 00:15:12.383 "write": true, 00:15:12.383 "unmap": true, 00:15:12.383 "flush": false, 00:15:12.383 "reset": true, 00:15:12.383 "nvme_admin": false, 00:15:12.383 "nvme_io": false, 00:15:12.383 "nvme_io_md": 
false, 00:15:12.383 "write_zeroes": true, 00:15:12.383 "zcopy": false, 00:15:12.383 "get_zone_info": false, 00:15:12.383 "zone_management": false, 00:15:12.383 "zone_append": false, 00:15:12.383 "compare": false, 00:15:12.383 "compare_and_write": false, 00:15:12.383 "abort": false, 00:15:12.383 "seek_hole": true, 00:15:12.383 "seek_data": true, 00:15:12.383 "copy": false, 00:15:12.383 "nvme_iov_md": false 00:15:12.383 }, 00:15:12.383 "driver_specific": { 00:15:12.383 "lvol": { 00:15:12.383 "lvol_store_uuid": "0a9fa6bb-00ba-462b-b088-1ce21ac8e62f", 00:15:12.383 "base_bdev": "aio_bdev", 00:15:12.383 "thin_provision": false, 00:15:12.383 "num_allocated_clusters": 38, 00:15:12.383 "snapshot": false, 00:15:12.383 "clone": false, 00:15:12.383 "esnap_clone": false 00:15:12.383 } 00:15:12.383 } 00:15:12.383 } 00:15:12.383 ] 00:15:12.383 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:12.383 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:12.383 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:15:12.383 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:12.383 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:15:12.384 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:12.643 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:12.643 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:12.643 [2024-07-15 13:00:34.466617] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:15:12.903 request: 00:15:12.903 { 00:15:12.903 "uuid": "0a9fa6bb-00ba-462b-b088-1ce21ac8e62f", 00:15:12.903 "method": "bdev_lvol_get_lvstores", 00:15:12.903 "req_id": 1 00:15:12.903 } 00:15:12.903 Got JSON-RPC error response 00:15:12.903 response: 00:15:12.903 { 00:15:12.903 "code": -19, 00:15:12.903 "message": "No such device" 00:15:12.903 } 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:12.903 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:13.164 aio_bdev 00:15:13.164 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev aca7cc97-7694-4865-9461-94691d612a8e 00:15:13.164 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=aca7cc97-7694-4865-9461-94691d612a8e 00:15:13.164 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:13.164 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:13.164 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:13.164 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:13.164 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:13.164 13:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aca7cc97-7694-4865-9461-94691d612a8e -t 2000 00:15:13.425 [ 00:15:13.425 { 00:15:13.425 "name": "aca7cc97-7694-4865-9461-94691d612a8e", 00:15:13.425 "aliases": [ 00:15:13.425 "lvs/lvol" 00:15:13.425 ], 00:15:13.425 "product_name": "Logical Volume", 00:15:13.425 "block_size": 4096, 00:15:13.425 "num_blocks": 38912, 00:15:13.425 "uuid": "aca7cc97-7694-4865-9461-94691d612a8e", 00:15:13.425 "assigned_rate_limits": { 00:15:13.425 "rw_ios_per_sec": 0, 00:15:13.425 "rw_mbytes_per_sec": 0, 00:15:13.425 "r_mbytes_per_sec": 0, 00:15:13.425 "w_mbytes_per_sec": 0 00:15:13.425 }, 00:15:13.425 "claimed": false, 00:15:13.425 "zoned": false, 00:15:13.425 "supported_io_types": { 
00:15:13.425 "read": true, 00:15:13.425 "write": true, 00:15:13.425 "unmap": true, 00:15:13.425 "flush": false, 00:15:13.425 "reset": true, 00:15:13.425 "nvme_admin": false, 00:15:13.425 "nvme_io": false, 00:15:13.425 "nvme_io_md": false, 00:15:13.425 "write_zeroes": true, 00:15:13.425 "zcopy": false, 00:15:13.425 "get_zone_info": false, 00:15:13.425 "zone_management": false, 00:15:13.425 "zone_append": false, 00:15:13.425 "compare": false, 00:15:13.425 "compare_and_write": false, 00:15:13.425 "abort": false, 00:15:13.425 "seek_hole": true, 00:15:13.425 "seek_data": true, 00:15:13.425 "copy": false, 00:15:13.425 "nvme_iov_md": false 00:15:13.425 }, 00:15:13.425 "driver_specific": { 00:15:13.425 "lvol": { 00:15:13.425 "lvol_store_uuid": "0a9fa6bb-00ba-462b-b088-1ce21ac8e62f", 00:15:13.425 "base_bdev": "aio_bdev", 00:15:13.425 "thin_provision": false, 00:15:13.425 "num_allocated_clusters": 38, 00:15:13.425 "snapshot": false, 00:15:13.425 "clone": false, 00:15:13.425 "esnap_clone": false 00:15:13.425 } 00:15:13.425 } 00:15:13.425 } 00:15:13.425 ] 00:15:13.425 13:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:13.426 13:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:15:13.426 13:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:13.687 13:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:13.687 13:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:15:13.687 13:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:13.687 13:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:13.687 13:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aca7cc97-7694-4865-9461-94691d612a8e 00:15:13.948 13:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0a9fa6bb-00ba-462b-b088-1ce21ac8e62f 00:15:13.948 13:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:14.209 13:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:14.209 00:15:14.209 real 0m17.063s 00:15:14.209 user 0m44.643s 00:15:14.209 sys 0m2.956s 00:15:14.209 13:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:14.210 13:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:14.210 ************************************ 00:15:14.210 END TEST lvs_grow_dirty 00:15:14.210 ************************************ 00:15:14.210 13:00:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:14.210 13:00:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:15:14.210 13:00:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:14.210 13:00:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:14.210 13:00:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:14.210 13:00:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:14.210 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:14.210 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:14.210 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:14.210 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:14.210 nvmf_trace.0 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:14.470 rmmod nvme_tcp 00:15:14.470 rmmod nvme_fabrics 00:15:14.470 rmmod nvme_keyring 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 629156 ']' 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 629156 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 629156 ']' 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 629156 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 629156 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 629156' 00:15:14.470 killing process with pid 629156 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 629156 00:15:14.470 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 629156 00:15:14.730 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:14.730 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:14.730 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:14.730 13:00:36 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.730 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:14.730 13:00:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.730 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.730 13:00:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.643 13:00:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:16.643 00:15:16.643 real 0m44.386s 00:15:16.643 user 1m5.887s 00:15:16.643 sys 0m10.811s 00:15:16.643 13:00:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.643 13:00:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:16.643 ************************************ 00:15:16.643 END TEST nvmf_lvs_grow 00:15:16.643 ************************************ 00:15:16.643 13:00:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:16.643 13:00:38 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:16.643 13:00:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:16.643 13:00:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.643 13:00:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.643 ************************************ 00:15:16.643 START TEST nvmf_bdev_io_wait 00:15:16.643 ************************************ 00:15:16.643 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:16.904 * Looking for test storage... 
00:15:16.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.904 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.905 13:00:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:25.052 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:25.052 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:25.052 Found net devices under 0000:31:00.0: cvl_0_0 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:25.052 Found net devices under 0000:31:00.1: cvl_0_1 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:25.052 13:00:45 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.052 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.052 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.052 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.052 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:25.052 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.052 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.052 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.052 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:25.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:25.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:15:25.052 00:15:25.052 --- 10.0.0.2 ping statistics --- 00:15:25.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.052 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:15:25.052 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:15:25.052 00:15:25.052 --- 10.0.0.1 ping statistics --- 00:15:25.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.052 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:15:25.052 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.052 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=634430 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 634430 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 634430 ']' 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.053 13:00:46 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.053 [2024-07-15 13:00:46.379217] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:15:25.053 [2024-07-15 13:00:46.379294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.053 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.053 [2024-07-15 13:00:46.456951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.053 [2024-07-15 13:00:46.531066] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.053 [2024-07-15 13:00:46.531107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.053 [2024-07-15 13:00:46.531115] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.053 [2024-07-15 13:00:46.531121] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.053 [2024-07-15 13:00:46.531126] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.053 [2024-07-15 13:00:46.531251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.053 [2024-07-15 13:00:46.531368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.053 [2024-07-15 13:00:46.531610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.053 [2024-07-15 13:00:46.531611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.624 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.624 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.625 [2024-07-15 13:00:47.259818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
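For reference, the nvmf_tcp_init and nvmfappstart steps traced above reduce to roughly the shell sequence below. This is a minimal sketch, not the harness itself: the interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addresses, port 4420 and the nvmf_tgt arguments are copied from this run, the commands assume the spdk checkout as the working directory, and scripts/rpc.py stands in for the harness's rpc_cmd wrapper; substitute your own NIC names and paths when reproducing.

  # move the target-side port into its own network namespace; the initiator side stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  modprobe nvme-tcp
  # start the target inside the namespace and bring it to a configurable state
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  ./scripts/rpc.py bdev_set_options -p 5 -c 1
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

The Malloc0 bdev, the cnode1 subsystem and the TCP listener that the trace creates next are added through the same RPC channel.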
00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.625 Malloc0 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:25.625 [2024-07-15 13:00:47.327494] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=634612 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=634614 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:25.625 { 00:15:25.625 "params": { 00:15:25.625 "name": "Nvme$subsystem", 00:15:25.625 "trtype": "$TEST_TRANSPORT", 00:15:25.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.625 "adrfam": "ipv4", 00:15:25.625 "trsvcid": "$NVMF_PORT", 00:15:25.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.625 "hdgst": ${hdgst:-false}, 00:15:25.625 "ddgst": ${ddgst:-false} 00:15:25.625 }, 00:15:25.625 "method": "bdev_nvme_attach_controller" 00:15:25.625 } 00:15:25.625 EOF 00:15:25.625 )") 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=634616 00:15:25.625 13:00:47 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:25.625 { 00:15:25.625 "params": { 00:15:25.625 "name": "Nvme$subsystem", 00:15:25.625 "trtype": "$TEST_TRANSPORT", 00:15:25.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.625 "adrfam": "ipv4", 00:15:25.625 "trsvcid": "$NVMF_PORT", 00:15:25.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.625 "hdgst": ${hdgst:-false}, 00:15:25.625 "ddgst": ${ddgst:-false} 00:15:25.625 }, 00:15:25.625 "method": "bdev_nvme_attach_controller" 00:15:25.625 } 00:15:25.625 EOF 00:15:25.625 )") 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=634619 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:25.625 { 00:15:25.625 "params": { 00:15:25.625 "name": "Nvme$subsystem", 00:15:25.625 "trtype": "$TEST_TRANSPORT", 00:15:25.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.625 "adrfam": "ipv4", 00:15:25.625 "trsvcid": "$NVMF_PORT", 00:15:25.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.625 "hdgst": ${hdgst:-false}, 00:15:25.625 "ddgst": ${ddgst:-false} 00:15:25.625 }, 00:15:25.625 "method": "bdev_nvme_attach_controller" 00:15:25.625 } 00:15:25.625 EOF 00:15:25.625 )") 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:25.625 { 00:15:25.625 "params": { 00:15:25.625 "name": "Nvme$subsystem", 00:15:25.625 "trtype": "$TEST_TRANSPORT", 00:15:25.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:25.625 "adrfam": "ipv4", 00:15:25.625 "trsvcid": "$NVMF_PORT", 00:15:25.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:25.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:25.625 "hdgst": ${hdgst:-false}, 00:15:25.625 "ddgst": ${ddgst:-false} 00:15:25.625 }, 00:15:25.625 "method": "bdev_nvme_attach_controller" 00:15:25.625 } 00:15:25.625 EOF 00:15:25.625 )") 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 634612 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:25.625 "params": { 00:15:25.625 "name": "Nvme1", 00:15:25.625 "trtype": "tcp", 00:15:25.625 "traddr": "10.0.0.2", 00:15:25.625 "adrfam": "ipv4", 00:15:25.625 "trsvcid": "4420", 00:15:25.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.625 "hdgst": false, 00:15:25.625 "ddgst": false 00:15:25.625 }, 00:15:25.625 "method": "bdev_nvme_attach_controller" 00:15:25.625 }' 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:25.625 "params": { 00:15:25.625 "name": "Nvme1", 00:15:25.625 "trtype": "tcp", 00:15:25.625 "traddr": "10.0.0.2", 00:15:25.625 "adrfam": "ipv4", 00:15:25.625 "trsvcid": "4420", 00:15:25.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.625 "hdgst": false, 00:15:25.625 "ddgst": false 00:15:25.625 }, 00:15:25.625 "method": "bdev_nvme_attach_controller" 00:15:25.625 }' 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
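The --json /dev/fd/63 argument on each bdevperf invocation above is fed by gen_nvmf_target_json; the bdev_nvme_attach_controller entry it emits is printed in the trace and, reflowed here only for readability, amounts to:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

Each of the four bdevperf runs (write, read, flush and unmap on core masks 0x10, 0x20, 0x40 and 0x80) therefore attaches its own controller named Nvme1 to the listener at 10.0.0.2:4420 and drives the resulting Nvme1n1 bdev for one second at queue depth 128 with 4096-byte I/O, which is what the per-workload latency tables further down report.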
00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:25.625 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:25.625 "params": { 00:15:25.625 "name": "Nvme1", 00:15:25.625 "trtype": "tcp", 00:15:25.625 "traddr": "10.0.0.2", 00:15:25.625 "adrfam": "ipv4", 00:15:25.625 "trsvcid": "4420", 00:15:25.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.625 "hdgst": false, 00:15:25.625 "ddgst": false 00:15:25.625 }, 00:15:25.626 "method": "bdev_nvme_attach_controller" 00:15:25.626 }' 00:15:25.626 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:25.626 13:00:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:25.626 "params": { 00:15:25.626 "name": "Nvme1", 00:15:25.626 "trtype": "tcp", 00:15:25.626 "traddr": "10.0.0.2", 00:15:25.626 "adrfam": "ipv4", 00:15:25.626 "trsvcid": "4420", 00:15:25.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.626 "hdgst": false, 00:15:25.626 "ddgst": false 00:15:25.626 }, 00:15:25.626 "method": "bdev_nvme_attach_controller" 00:15:25.626 }' 00:15:25.626 [2024-07-15 13:00:47.379580] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:15:25.626 [2024-07-15 13:00:47.379633] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:25.626 [2024-07-15 13:00:47.381657] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:15:25.626 [2024-07-15 13:00:47.381709] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:25.626 [2024-07-15 13:00:47.385342] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:15:25.626 [2024-07-15 13:00:47.385388] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:25.626 [2024-07-15 13:00:47.387928] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:15:25.626 [2024-07-15 13:00:47.387972] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:25.626 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.886 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.886 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.886 [2024-07-15 13:00:47.533764] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.886 [2024-07-15 13:00:47.575452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.886 [2024-07-15 13:00:47.585966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:25.886 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.886 [2024-07-15 13:00:47.624957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:25.886 [2024-07-15 13:00:47.636986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.886 [2024-07-15 13:00:47.686093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.886 [2024-07-15 13:00:47.687844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:26.146 [2024-07-15 13:00:47.736430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:26.146 Running I/O for 1 seconds... 00:15:26.146 Running I/O for 1 seconds... 00:15:26.146 Running I/O for 1 seconds... 00:15:26.146 Running I/O for 1 seconds... 00:15:27.088 00:15:27.088 Latency(us) 00:15:27.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.088 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:27.088 Nvme1n1 : 1.01 12997.18 50.77 0.00 0.00 9813.14 6608.21 17694.72 00:15:27.088 =================================================================================================================== 00:15:27.088 Total : 12997.18 50.77 0.00 0.00 9813.14 6608.21 17694.72 00:15:27.088 00:15:27.088 Latency(us) 00:15:27.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.088 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:27.088 Nvme1n1 : 1.00 187395.10 732.01 0.00 0.00 680.38 269.65 781.65 00:15:27.088 =================================================================================================================== 00:15:27.088 Total : 187395.10 732.01 0.00 0.00 680.38 269.65 781.65 00:15:27.088 00:15:27.088 Latency(us) 00:15:27.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.088 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:27.088 Nvme1n1 : 1.00 19003.32 74.23 0.00 0.00 6718.68 3454.29 17039.36 00:15:27.088 =================================================================================================================== 00:15:27.088 Total : 19003.32 74.23 0.00 0.00 6718.68 3454.29 17039.36 00:15:27.349 00:15:27.349 Latency(us) 00:15:27.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.349 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:27.349 Nvme1n1 : 1.01 11461.34 44.77 0.00 0.00 11132.90 5188.27 24576.00 00:15:27.349 =================================================================================================================== 00:15:27.349 Total : 11461.34 44.77 0.00 0.00 11132.90 5188.27 24576.00 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 634614 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 634616 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 634619 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.349 rmmod nvme_tcp 00:15:27.349 rmmod nvme_fabrics 00:15:27.349 rmmod nvme_keyring 00:15:27.349 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 634430 ']' 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 634430 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 634430 ']' 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 634430 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 634430 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 634430' 00:15:27.609 killing process with pid 634430 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 634430 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 634430 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.609 13:00:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.151 13:00:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:30.151 00:15:30.151 real 0m12.982s 00:15:30.151 user 0m18.725s 00:15:30.151 sys 0m7.201s 00:15:30.151 13:00:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:30.151 13:00:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:30.151 ************************************ 00:15:30.151 END TEST nvmf_bdev_io_wait 00:15:30.151 ************************************ 00:15:30.151 13:00:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:30.151 13:00:51 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:30.151 13:00:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:30.151 13:00:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.151 13:00:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:30.151 ************************************ 00:15:30.151 START TEST nvmf_queue_depth 00:15:30.151 ************************************ 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:30.151 * Looking for test storage... 
00:15:30.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:30.151 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.152 13:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.152 13:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.152 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:30.152 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:30.152 13:00:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:30.152 13:00:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.293 
13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:38.293 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:38.294 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:38.294 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:38.294 Found net devices under 0000:31:00.0: cvl_0_0 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:38.294 Found net devices under 0000:31:00.1: cvl_0_1 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:38.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:38.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:15:38.294 00:15:38.294 --- 10.0.0.2 ping statistics --- 00:15:38.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.294 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:38.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:15:38.294 00:15:38.294 --- 10.0.0.1 ping statistics --- 00:15:38.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.294 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=639646 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 639646 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 639646 ']' 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.294 13:00:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:38.294 [2024-07-15 13:00:59.777166] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:15:38.294 [2024-07-15 13:00:59.777239] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.294 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.294 [2024-07-15 13:00:59.873054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.294 [2024-07-15 13:00:59.965396] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.294 [2024-07-15 13:00:59.965463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.294 [2024-07-15 13:00:59.965476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.294 [2024-07-15 13:00:59.965485] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.294 [2024-07-15 13:00:59.965493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.294 [2024-07-15 13:00:59.965525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:38.867 [2024-07-15 13:01:00.624950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:38.867 Malloc0 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.867 
13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.867 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:39.128 [2024-07-15 13:01:00.698043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.128 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.128 13:01:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=639906 00:15:39.128 13:01:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:39.128 13:01:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:39.128 13:01:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 639906 /var/tmp/bdevperf.sock 00:15:39.128 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 639906 ']' 00:15:39.128 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:39.128 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.128 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:39.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:39.128 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.128 13:01:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:39.128 [2024-07-15 13:01:00.754915] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:15:39.128 [2024-07-15 13:01:00.754977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639906 ] 00:15:39.128 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.128 [2024-07-15 13:01:00.825529] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.128 [2024-07-15 13:01:00.899690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.069 13:01:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.069 13:01:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:40.069 13:01:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:40.069 13:01:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.069 13:01:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:40.069 NVMe0n1 00:15:40.069 13:01:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.069 13:01:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:40.069 Running I/O for 10 seconds... 00:15:50.171 00:15:50.171 Latency(us) 00:15:50.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.171 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:50.171 Verification LBA range: start 0x0 length 0x4000 00:15:50.171 NVMe0n1 : 10.05 11432.03 44.66 0.00 0.00 89219.12 8410.45 74274.13 00:15:50.171 =================================================================================================================== 00:15:50.171 Total : 11432.03 44.66 0.00 0.00 89219.12 8410.45 74274.13 00:15:50.171 0 00:15:50.171 13:01:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 639906 00:15:50.171 13:01:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 639906 ']' 00:15:50.171 13:01:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 639906 00:15:50.171 13:01:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:50.171 13:01:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.171 13:01:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 639906 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 639906' 00:15:50.431 killing process with pid 639906 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 639906 00:15:50.431 Received shutdown signal, test time was about 10.000000 seconds 00:15:50.431 00:15:50.431 Latency(us) 00:15:50.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.431 =================================================================================================================== 
00:15:50.431 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 639906 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:50.431 rmmod nvme_tcp 00:15:50.431 rmmod nvme_fabrics 00:15:50.431 rmmod nvme_keyring 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 639646 ']' 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 639646 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 639646 ']' 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 639646 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.431 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 639646 00:15:50.691 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:50.691 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:50.691 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 639646' 00:15:50.691 killing process with pid 639646 00:15:50.691 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 639646 00:15:50.691 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 639646 00:15:50.691 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:50.691 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:50.691 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:50.691 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.691 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:50.691 13:01:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.691 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.691 13:01:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.237 13:01:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:53.237 00:15:53.237 real 0m22.940s 00:15:53.237 user 0m26.013s 
00:15:53.237 sys 0m7.156s 00:15:53.237 13:01:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:53.237 13:01:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:53.237 ************************************ 00:15:53.237 END TEST nvmf_queue_depth 00:15:53.237 ************************************ 00:15:53.237 13:01:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:53.237 13:01:14 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:53.237 13:01:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:53.237 13:01:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:53.237 13:01:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.237 ************************************ 00:15:53.237 START TEST nvmf_target_multipath 00:15:53.237 ************************************ 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:53.237 * Looking for test storage... 00:15:53.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.237 13:01:14 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.237 13:01:14 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:53.238 13:01:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:01.376 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:01.376 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:01.376 Found net devices under 0000:31:00.0: cvl_0_0 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:01.376 Found net devices under 0000:31:00.1: cvl_0_1 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.376 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:01.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:16:01.377 00:16:01.377 --- 10.0.0.2 ping statistics --- 00:16:01.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.377 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:01.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:16:01.377 00:16:01.377 --- 10.0.0.1 ping statistics --- 00:16:01.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.377 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:01.377 only one NIC for nvmf test 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:01.377 rmmod nvme_tcp 00:16:01.377 rmmod nvme_fabrics 00:16:01.377 rmmod nvme_keyring 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.377 13:01:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:03.285 00:16:03.285 real 0m10.416s 00:16:03.285 user 0m2.299s 00:16:03.285 sys 0m6.016s 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:03.285 13:01:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:03.285 ************************************ 00:16:03.285 END TEST nvmf_target_multipath 00:16:03.285 ************************************ 00:16:03.285 13:01:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:03.285 13:01:24 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:03.285 13:01:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:03.285 13:01:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.285 13:01:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:03.285 ************************************ 00:16:03.285 START TEST nvmf_zcopy 00:16:03.285 ************************************ 00:16:03.285 13:01:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:03.545 * Looking for test storage... 
00:16:03.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.545 13:01:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.546 13:01:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.546 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:03.546 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:03.546 13:01:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:03.546 13:01:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:11.679 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:11.680 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:11.680 
13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:11.680 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:11.680 Found net devices under 0000:31:00.0: cvl_0_0 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:11.680 Found net devices under 0000:31:00.1: cvl_0_1 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:11.680 13:01:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:11.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:16:11.680 00:16:11.680 --- 10.0.0.2 ping statistics --- 00:16:11.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.680 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:11.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:11.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:16:11.680 00:16:11.680 --- 10.0.0.1 ping statistics --- 00:16:11.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.680 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=651365 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 651365 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 651365 ']' 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.680 13:01:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:11.680 [2024-07-15 13:01:33.292473] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:16:11.680 [2024-07-15 13:01:33.292543] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.680 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.680 [2024-07-15 13:01:33.389365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.680 [2024-07-15 13:01:33.481197] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.680 [2024-07-15 13:01:33.481266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:11.680 [2024-07-15 13:01:33.481278] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.680 [2024-07-15 13:01:33.481287] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.680 [2024-07-15 13:01:33.481295] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.680 [2024-07-15 13:01:33.481326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.252 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.252 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:12.252 13:01:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:12.252 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:12.252 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:12.512 [2024-07-15 13:01:34.120520] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:12.512 [2024-07-15 13:01:34.144715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:12.512 malloc0 00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.512 
13:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:16:12.512 {
00:16:12.512 "params": {
00:16:12.512 "name": "Nvme$subsystem",
00:16:12.512 "trtype": "$TEST_TRANSPORT",
00:16:12.512 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:12.512 "adrfam": "ipv4",
00:16:12.512 "trsvcid": "$NVMF_PORT",
00:16:12.512 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:12.512 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:12.512 "hdgst": ${hdgst:-false},
00:16:12.512 "ddgst": ${ddgst:-false}
00:16:12.512 },
00:16:12.512 "method": "bdev_nvme_attach_controller"
00:16:12.512 }
00:16:12.512 EOF
00:16:12.512 )")
00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:16:12.512 13:01:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:16:12.512 "params": {
00:16:12.512 "name": "Nvme1",
00:16:12.512 "trtype": "tcp",
00:16:12.512 "traddr": "10.0.0.2",
00:16:12.512 "adrfam": "ipv4",
00:16:12.512 "trsvcid": "4420",
00:16:12.512 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:12.512 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:12.512 "hdgst": false,
00:16:12.512 "ddgst": false
00:16:12.512 },
00:16:12.512 "method": "bdev_nvme_attach_controller"
00:16:12.512 }'
00:16:12.512 [2024-07-15 13:01:34.251784] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization...
00:16:12.512 [2024-07-15 13:01:34.251870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651710 ]
00:16:12.512 EAL: No free 2048 kB hugepages reported on node 1
00:16:12.512 [2024-07-15 13:01:34.322767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:12.772 [2024-07-15 13:01:34.396310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:12.772 Running I/O for 10 seconds...
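To summarize the zcopy target bring-up traced above, the rpc_cmd calls reduce to the short sequence below (condensed from the xtrace lines, not a verbatim excerpt of target/zcopy.sh; rpc_cmd is the test suite's RPC helper, and the address, NQN and bdev names are the ones used in this run):

    # condensed sketch of the target setup performed above
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy          # TCP transport with zero-copy enabled
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0                 # malloc bdev backing the namespace (sizes as passed above)
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # the initiator side is bdevperf, fed the JSON emitted by gen_nvmf_target_json:
    bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192     # 10 s verify run, queue depth 128, 8 KiB I/O

The 10.0.0.2 listener lives inside the cvl_0_0_ns_spdk network namespace created earlier, so bdevperf reaches it from the host side over cvl_0_1.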
00:16:25.010
00:16:25.010 Latency(us)
00:16:25.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:25.010 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:16:25.010 Verification LBA range: start 0x0 length 0x1000
00:16:25.010 Nvme1n1 : 10.01 9095.53 71.06 0.00 0.00 14020.52 1870.51 28835.84
00:16:25.010 ===================================================================================================================
00:16:25.010 Total : 9095.53 71.06 0.00 0.00 14020.52 1870.51 28835.84
00:16:25.010 13:01:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=653712
00:16:25.010 13:01:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:16:25.010 13:01:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:25.010 13:01:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:16:25.010 13:01:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:16:25.010 13:01:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:16:25.010 13:01:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:16:25.010 13:01:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:16:25.010 13:01:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:16:25.010 {
00:16:25.010 "params": {
00:16:25.010 "name": "Nvme$subsystem",
00:16:25.010 "trtype": "$TEST_TRANSPORT",
00:16:25.010 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:25.010 "adrfam": "ipv4",
00:16:25.010 "trsvcid": "$NVMF_PORT",
00:16:25.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:25.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:25.010 "hdgst": ${hdgst:-false},
00:16:25.010 "ddgst": ${ddgst:-false}
00:16:25.010 },
00:16:25.010 "method": "bdev_nvme_attach_controller"
00:16:25.010 }
00:16:25.010 EOF
00:16:25.010 )")
00:16:25.010 13:01:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:16:25.010 [2024-07-15 13:01:44.758568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-15 13:01:44.758597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:25.010 13:01:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:16:25.010 13:01:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:25.010 13:01:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:25.010 "params": { 00:16:25.010 "name": "Nvme1", 00:16:25.010 "trtype": "tcp", 00:16:25.010 "traddr": "10.0.0.2", 00:16:25.010 "adrfam": "ipv4", 00:16:25.010 "trsvcid": "4420", 00:16:25.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:25.010 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:25.010 "hdgst": false, 00:16:25.010 "ddgst": false 00:16:25.010 }, 00:16:25.010 "method": "bdev_nvme_attach_controller" 00:16:25.010 }' 00:16:25.010 [2024-07-15 13:01:44.770577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.010 [2024-07-15 13:01:44.770590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.010 [2024-07-15 13:01:44.782599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.010 [2024-07-15 13:01:44.782608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.010 [2024-07-15 13:01:44.794629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.010 [2024-07-15 13:01:44.794637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.010 [2024-07-15 13:01:44.798248] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:16:25.010 [2024-07-15 13:01:44.798295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid653712 ] 00:16:25.010 [2024-07-15 13:01:44.806666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.010 [2024-07-15 13:01:44.806681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.010 [2024-07-15 13:01:44.818691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.010 [2024-07-15 13:01:44.818700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.010 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.010 [2024-07-15 13:01:44.830721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.830729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.842751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.842759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.854783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.854792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.862494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.011 [2024-07-15 13:01:44.866815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.866824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.878841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.878851] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.890871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.890881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.902903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.902915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.914934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.914943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.926523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.011 [2024-07-15 13:01:44.926965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.926974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.938996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.939006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.951031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.951044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.963059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.963069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.975090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.975099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.987119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.987127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:44.999150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:44.999157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.011193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.011209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.023217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.023235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.035249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.035259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.047282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.047290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.059312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.059320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.071344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.071354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.083375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.083386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.095406] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.095417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.107442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.107456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 Running I/O for 5 seconds... 00:16:25.011 [2024-07-15 13:01:45.119469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.119477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.134411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.134427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.147515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.147531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.160771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.160787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.173838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.173854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.186546] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.186562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.199913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.199929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.213328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.213344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.226871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.226887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.240150] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.240166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.253276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.253292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.266391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.266407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.279874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.279890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.292992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.293008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.305586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.305602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.317997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.318014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.331425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.331441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.344762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.344778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.358002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.358017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.371273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.371288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.384489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.384504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.397799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.397814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.410523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.410539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.423645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.423661] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.436483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.436499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.449849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.449866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.463224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.463245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.476645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.476662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.489838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.489855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.503263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.503279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.516554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.516570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.530070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.530086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.011 [2024-07-15 13:01:45.543367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.011 [2024-07-15 13:01:45.543383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.556676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.556691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.569821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.569837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.583170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.583185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.596115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.596131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.608319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.608335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.620948] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.620963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.634101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.634117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.647120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.647136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.659981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.660001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.673168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.673184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.686341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.686357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.699625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.699642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.712650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.712666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.726248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.726264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.738706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.738722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.751985] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.752002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.765626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.765642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.778907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.778924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.792398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.792415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.805731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.805747] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.819261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.819277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.832028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.832044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.845499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.845516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.858730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.858746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.871849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.871865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.885185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.885201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.898698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.898714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.911848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.911868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.925101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.925116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.938288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.938303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.951451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.951467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.964503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.964518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.977457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.977472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:45.989579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:45.989594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.003140] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.003156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.016246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.016261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.029275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.029290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.042664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.042681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.056008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.056024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.068928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.068944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.081492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.081508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.094479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.094495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.107657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.107673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.121329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.121345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.134486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.134502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.148054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.148070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.160581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.160600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.173809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.173825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.186840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.186856] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.200263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.200279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.213426] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.213442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.225867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.225882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.238847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.238863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.251771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.251788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.265147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.265163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.278606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.278622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.291772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.291787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.305350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.305365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.012 [2024-07-15 13:01:46.319006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.012 [2024-07-15 13:01:46.319021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.332142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.332158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.345143] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.345158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.358056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.358072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.371367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.371382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.384419] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.384434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.397815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.397830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.410087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.410107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.423476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.423492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.436245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.436261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.449268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.449283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.461901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.461916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.474487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.474503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.487721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.487736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.500948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.500964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.514127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.514142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.527589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.527604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.541095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.541110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.554496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.554511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.567995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.568010] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.580857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.580872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.594271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.594286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.607346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.607362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.620345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.620361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.633650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.633665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.647029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.647044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.659460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.659475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.672433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.672448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.685920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.685935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.699169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.699184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.712505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.712520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.725169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.725184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.738253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.738268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.751254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.751270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.763866] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.763882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.776857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.776873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.790058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.790074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.803128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.803144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.816484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.816500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.013 [2024-07-15 13:01:46.829840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.013 [2024-07-15 13:01:46.829855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.273 [2024-07-15 13:01:46.843263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.273 [2024-07-15 13:01:46.843279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.273 [2024-07-15 13:01:46.856360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.273 [2024-07-15 13:01:46.856375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.273 [2024-07-15 13:01:46.869966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.273 [2024-07-15 13:01:46.869981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:46.883265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:46.883281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:46.896667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:46.896682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:46.908874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:46.908890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:46.922264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:46.922280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:46.935730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:46.935746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:46.948733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:46.948749] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:46.961815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:46.961830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:46.974773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:46.974788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:46.988086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:46.988101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:47.001432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:47.001448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:47.014707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:47.014722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:47.028037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:47.028052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:47.041283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:47.041299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:47.054586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:47.054602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:47.068136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:47.068151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:47.080571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:47.080588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.274 [2024-07-15 13:01:47.093937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.274 [2024-07-15 13:01:47.093953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.534 [2024-07-15 13:01:47.107304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.534 [2024-07-15 13:01:47.107319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.534 [2024-07-15 13:01:47.120624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.120640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.133369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.133385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.146641] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.146656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.159345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.159361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.172013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.172029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.184982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.184998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.198524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.198541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.211850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.211866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.225153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.225168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.238067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.238082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.251029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.251044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.263899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.263915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.276882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.276897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.290337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.290353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.303775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.303791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.317216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.317237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.330279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.330296] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.343055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.343071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.535 [2024-07-15 13:01:47.356020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.535 [2024-07-15 13:01:47.356035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.368795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.368811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.382253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.382269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.395868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.395883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.408828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.408844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.422285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.422302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.435592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.435607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.448726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.448742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.461746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.461761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.474412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.474428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.487511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.487527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.500261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.500276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.513621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.513637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.526483] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.526500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.539825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.539840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.553011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.553026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.565706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.565722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.578572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.578587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.591830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.591845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.605080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.605096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.795 [2024-07-15 13:01:47.618826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.795 [2024-07-15 13:01:47.618841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.632394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.632410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.645285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.645304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.657505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.657521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.671310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.671326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.683998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.684013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.697240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.697256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.710632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.710647] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.723517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.723533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.736980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.736995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.750398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.750413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.763192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.763207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.776213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.776234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.789536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.789552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.802708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.802727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.815318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.815334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.828448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.828464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.841604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.841620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.854928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.854944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.055 [2024-07-15 13:01:47.868044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.055 [2024-07-15 13:01:47.868059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:47.881636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:47.881652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:47.894387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:47.894406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:47.907837] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:47.907853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:47.920932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:47.920948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:47.933811] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:47.933826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:47.947243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:47.947258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:47.960263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:47.960279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:47.973049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:47.973064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:47.986031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:47.986046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:47.998710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:47.998726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:48.011767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:48.011782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:48.024976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:48.024991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:48.037362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:48.037377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:48.049943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:48.049958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:48.063390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:48.063406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:48.075897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:48.075913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:48.088889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:48.088905] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:48.101712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:48.101728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:48.113821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:48.113837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:48.126728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:48.126744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.315 [2024-07-15 13:01:48.139903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.315 [2024-07-15 13:01:48.139923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.574 [2024-07-15 13:01:48.153470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.574 [2024-07-15 13:01:48.153486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.574 [2024-07-15 13:01:48.166362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.574 [2024-07-15 13:01:48.166377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.574 [2024-07-15 13:01:48.179348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.574 [2024-07-15 13:01:48.179364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.574 [2024-07-15 13:01:48.192471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.574 [2024-07-15 13:01:48.192487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.574 [2024-07-15 13:01:48.204951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.574 [2024-07-15 13:01:48.204966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.574 [2024-07-15 13:01:48.218188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.574 [2024-07-15 13:01:48.218204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.574 [2024-07-15 13:01:48.231262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.574 [2024-07-15 13:01:48.231278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.574 [2024-07-15 13:01:48.244626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.574 [2024-07-15 13:01:48.244642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.575 [2024-07-15 13:01:48.257918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.575 [2024-07-15 13:01:48.257933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.575 [2024-07-15 13:01:48.271343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.575 [2024-07-15 13:01:48.271359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.575 [2024-07-15 13:01:48.283769] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.575 [2024-07-15 13:01:48.283787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.575 [2024-07-15 13:01:48.296805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.575 [2024-07-15 13:01:48.296820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.575 [2024-07-15 13:01:48.309511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.575 [2024-07-15 13:01:48.309526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.575 [2024-07-15 13:01:48.322805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.575 [2024-07-15 13:01:48.322820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.575 [2024-07-15 13:01:48.336668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.575 [2024-07-15 13:01:48.336683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.575 [2024-07-15 13:01:48.349274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.575 [2024-07-15 13:01:48.349289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.575 [2024-07-15 13:01:48.362648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.575 [2024-07-15 13:01:48.362663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.575 [2024-07-15 13:01:48.375889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.575 [2024-07-15 13:01:48.375904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.575 [2024-07-15 13:01:48.388697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.575 [2024-07-15 13:01:48.388715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.401027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.401042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.414650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.414665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.428151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.428166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.441506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.441521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.454707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.454722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.466871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.466887] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.479957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.479972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.492639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.492655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.505955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.505971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.519325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.519341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.532478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.532493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.545468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.545483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.558807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.558822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.572054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.572069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.585547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.585562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.598845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.598860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.611117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.611132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.624130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.624145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.636670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.636685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.834 [2024-07-15 13:01:48.648977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.834 [2024-07-15 13:01:48.648992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.661221] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.661241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.674533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.674548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.688175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.688191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.700795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.700811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.714107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.714123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.727006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.727022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.739635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.739651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.752796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.752811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.765395] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.765410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.778267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.778283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.791344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.791360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.804180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.804197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.817263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.817279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.830257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.830272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.843600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.843616] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.857121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.857137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.869622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.869639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.882922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.882938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.896352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.896368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.094 [2024-07-15 13:01:48.909612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.094 [2024-07-15 13:01:48.909628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:48.922425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:48.922441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:48.935475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:48.935491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:48.948632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:48.948648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:48.961849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:48.961865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:48.975062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:48.975077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:48.988063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:48.988079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.001416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.001432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.014018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.014033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.026534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.026549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.039433] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.039449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.052583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.052598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.065175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.065191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.078241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.078257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.091223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.091243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.104882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.104898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.118367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.118383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.131029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.131045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.144555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.144571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.158201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.158216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.354 [2024-07-15 13:01:49.170759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.354 [2024-07-15 13:01:49.170774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.615 [2024-07-15 13:01:49.183351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.615 [2024-07-15 13:01:49.183367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.615 [2024-07-15 13:01:49.196815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.615 [2024-07-15 13:01:49.196831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.615 [2024-07-15 13:01:49.210475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.615 [2024-07-15 13:01:49.210492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.615 [2024-07-15 13:01:49.224000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.615 [2024-07-15 13:01:49.224016] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.615 [2024-07-15 13:01:49.237683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.615 [2024-07-15 13:01:49.237698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.615 [2024-07-15 13:01:49.250409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.615 [2024-07-15 13:01:49.250424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.615 [2024-07-15 13:01:49.262653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.615 [2024-07-15 13:01:49.262669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.615 [2024-07-15 13:01:49.275644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.615 [2024-07-15 13:01:49.275660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.615 [2024-07-15 13:01:49.288894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.615 [2024-07-15 13:01:49.288910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.615 [2024-07-15 13:01:49.301929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.615 [2024-07-15 13:01:49.301945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.615 [2024-07-15 13:01:49.314714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.616 [2024-07-15 13:01:49.314730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.616 [2024-07-15 13:01:49.327056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.616 [2024-07-15 13:01:49.327072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.616 [2024-07-15 13:01:49.340244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.616 [2024-07-15 13:01:49.340260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.616 [2024-07-15 13:01:49.353549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.616 [2024-07-15 13:01:49.353566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.616 [2024-07-15 13:01:49.366503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.616 [2024-07-15 13:01:49.366519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.616 [2024-07-15 13:01:49.379306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.616 [2024-07-15 13:01:49.379322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.616 [2024-07-15 13:01:49.392380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.616 [2024-07-15 13:01:49.392395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.616 [2024-07-15 13:01:49.405642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.616 [2024-07-15 13:01:49.405658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.616 [2024-07-15 13:01:49.418611] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.616 [2024-07-15 13:01:49.418627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.616 [2024-07-15 13:01:49.432197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.616 [2024-07-15 13:01:49.432214] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.876 [2024-07-15 13:01:49.445686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.876 [2024-07-15 13:01:49.445702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.876 [2024-07-15 13:01:49.458514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.876 [2024-07-15 13:01:49.458530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.876 [2024-07-15 13:01:49.471662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.876 [2024-07-15 13:01:49.471678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.876 [2024-07-15 13:01:49.484869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.876 [2024-07-15 13:01:49.484885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.498248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.498264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.511823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.511838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.525417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.525433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.538721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.538737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.551714] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.551729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.565290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.565305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.578585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.578600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.591760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.591775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.604390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.604406] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.616939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.616959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.630328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.630343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.643363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.643378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.656630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.656645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.669695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.669710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.682424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.682439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.877 [2024-07-15 13:01:49.695737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.877 [2024-07-15 13:01:49.695753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.709280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.709296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.722399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.722414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.735427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.735443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.748524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.748539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.762026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.762041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.775010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.775025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.787667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.787682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.801147] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.801162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.814635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.814651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.827919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.827934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.840645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.840661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.853597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.853613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.866015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.866034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.879256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.879271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.892031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.892047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.138 [2024-07-15 13:01:49.905051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.138 [2024-07-15 13:01:49.905067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.139 [2024-07-15 13:01:49.918105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.139 [2024-07-15 13:01:49.918121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.139 [2024-07-15 13:01:49.930793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.139 [2024-07-15 13:01:49.930808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.139 [2024-07-15 13:01:49.943931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.139 [2024-07-15 13:01:49.943946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.139 [2024-07-15 13:01:49.957213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.139 [2024-07-15 13:01:49.957233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:49.969932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:49.969948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:49.983189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:49.983204] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:49.995927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:49.995943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.008826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.008842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.021575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.021592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.035122] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.035138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.047972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.047988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.060958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.060973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.073946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.073961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.087112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.087128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.100206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.100221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.113415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.113435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.126894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.126911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 00:16:28.399 Latency(us) 00:16:28.399 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.399 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:28.399 Nvme1n1 : 5.00 19517.59 152.48 0.00 0.00 6551.98 2416.64 16602.45 00:16:28.399 =================================================================================================================== 00:16:28.399 Total : 19517.59 152.48 0.00 0.00 6551.98 2416.64 16602.45 00:16:28.399 [2024-07-15 13:01:50.136561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.136576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.148589] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.148602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.160627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.160639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.172658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.172671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.184699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.184710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.196715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.196725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.208742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.208751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.399 [2024-07-15 13:01:50.220774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.399 [2024-07-15 13:01:50.220785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.660 [2024-07-15 13:01:50.232804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.660 [2024-07-15 13:01:50.232815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.660 [2024-07-15 13:01:50.244834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.660 [2024-07-15 13:01:50.244844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.660 [2024-07-15 13:01:50.256864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.660 [2024-07-15 13:01:50.256873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.660 [2024-07-15 13:01:50.268893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.660 [2024-07-15 13:01:50.268901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (653712) - No such process 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 653712 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.660 delay0 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.660 13:01:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:28.660 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.660 [2024-07-15 13:01:50.410608] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:35.258 Initializing NVMe Controllers 00:16:35.258 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:35.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:35.258 Initialization complete. Launching workers. 00:16:35.258 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2283 00:16:35.258 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2565, failed to submit 38 00:16:35.258 success 2402, unsuccess 163, failed 0 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.258 rmmod nvme_tcp 00:16:35.258 rmmod nvme_fabrics 00:16:35.258 rmmod nvme_keyring 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 651365 ']' 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 651365 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 651365 ']' 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 651365 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 651365 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:35.258 13:01:56 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 651365' 00:16:35.258 killing process with pid 651365 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 651365 00:16:35.258 13:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 651365 00:16:35.258 13:01:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:35.258 13:01:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:35.258 13:01:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:35.258 13:01:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.258 13:01:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:35.258 13:01:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.258 13:01:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.258 13:01:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.854 13:01:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:37.854 00:16:37.854 real 0m34.078s 00:16:37.854 user 0m45.130s 00:16:37.854 sys 0m10.812s 00:16:37.854 13:01:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:37.854 13:01:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:37.854 ************************************ 00:16:37.854 END TEST nvmf_zcopy 00:16:37.854 ************************************ 00:16:37.854 13:01:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:37.854 13:01:59 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:37.854 13:01:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:37.854 13:01:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.854 13:01:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:37.854 ************************************ 00:16:37.854 START TEST nvmf_nmic 00:16:37.854 ************************************ 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:37.855 * Looking for test storage... 
00:16:37.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.855 13:01:59 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:37.855 13:01:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:45.998 
13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:45.998 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.998 13:02:07 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:45.998 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:45.998 Found net devices under 0000:31:00.0: cvl_0_0 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:45.998 Found net devices under 0000:31:00.1: cvl_0_1 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
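At this point the harness has settled on its two-port layout: cvl_0_0 becomes the target-side port, moved into the cvl_0_0_ns_spdk network namespace at 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1/24 (the ip netns / ip addr / iptables commands that do this follow in the trace below). A quick way to confirm the resulting topology once it is up is sketched here; it assumes only the interface and namespace names shown in this run and uses standard iproute2 calls, nothing SPDK-specific:
    # target side lives inside the namespace and owns 10.0.0.2/24
    ip netns exec cvl_0_0_ns_spdk ip -4 addr show dev cvl_0_0
    # initiator side stays in the root namespace at 10.0.0.1/24
    ip -4 addr show dev cvl_0_1
    # the two sides should reach each other over the physical e810 link
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1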
00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:45.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:16:45.998 00:16:45.998 --- 10.0.0.2 ping statistics --- 00:16:45.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.998 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:45.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:45.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:16:45.998 00:16:45.998 --- 10.0.0.1 ping statistics --- 00:16:45.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.998 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=660727 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 660727 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 660727 ']' 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.998 13:02:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:45.998 [2024-07-15 13:02:07.537571] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:16:45.998 [2024-07-15 13:02:07.537634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.998 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.998 [2024-07-15 13:02:07.617047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:45.998 [2024-07-15 13:02:07.692136] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.998 [2024-07-15 13:02:07.692175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:45.998 [2024-07-15 13:02:07.692183] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.998 [2024-07-15 13:02:07.692190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.998 [2024-07-15 13:02:07.692195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.998 [2024-07-15 13:02:07.692279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.998 [2024-07-15 13:02:07.692340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.998 [2024-07-15 13:02:07.692641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.998 [2024-07-15 13:02:07.692642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.567 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.567 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:46.567 13:02:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.567 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.567 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:46.567 13:02:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.567 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.567 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.567 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:46.567 [2024-07-15 13:02:08.368872] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.567 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.567 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:46.567 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.567 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:46.828 Malloc0 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:46.828 [2024-07-15 13:02:08.428370] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:46.828 test case1: single bdev can't be used in multiple subsystems 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:46.828 [2024-07-15 13:02:08.464339] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:46.828 [2024-07-15 13:02:08.464357] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:46.828 [2024-07-15 13:02:08.464364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:46.828 request: 00:16:46.828 { 00:16:46.828 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:46.828 "namespace": { 00:16:46.828 "bdev_name": "Malloc0", 00:16:46.828 "no_auto_visible": false 00:16:46.828 }, 00:16:46.828 "method": "nvmf_subsystem_add_ns", 00:16:46.828 "req_id": 1 00:16:46.828 } 00:16:46.828 Got JSON-RPC error response 00:16:46.828 response: 00:16:46.828 { 00:16:46.828 "code": -32602, 00:16:46.828 "message": "Invalid parameters" 00:16:46.828 } 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:46.828 Adding namespace failed - expected result. 
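The claim conflict that test case1 just demonstrated, and the second listener that test case2 adds next, can be reproduced by hand against a running nvmf_tgt with scripts/rpc.py. The lines below are a minimal sketch using the subsystem names, bdev name, and 10.0.0.2 listen address from this run; the rpc variable and the trailing echo are illustrative additions, not part of the captured trace:
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # a second subsystem may not claim the same bdev: this call is expected to fail
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'expected: Malloc0 already claimed by cnode1'
    # a second listener on the same subsystem is allowed and gives the host another path
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421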
00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:46.828 test case2: host connect to nvmf target in multiple paths 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:46.828 [2024-07-15 13:02:08.476461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.828 13:02:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:48.212 13:02:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:50.124 13:02:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:50.124 13:02:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:50.124 13:02:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:50.124 13:02:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:50.124 13:02:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:52.041 13:02:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:52.041 13:02:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:52.041 13:02:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:52.041 13:02:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:52.041 13:02:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:52.041 13:02:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:52.041 13:02:13 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:52.041 [global] 00:16:52.041 thread=1 00:16:52.041 invalidate=1 00:16:52.041 rw=write 00:16:52.041 time_based=1 00:16:52.041 runtime=1 00:16:52.041 ioengine=libaio 00:16:52.041 direct=1 00:16:52.041 bs=4096 00:16:52.041 iodepth=1 00:16:52.041 norandommap=0 00:16:52.041 numjobs=1 00:16:52.041 00:16:52.041 verify_dump=1 00:16:52.041 verify_backlog=512 00:16:52.041 verify_state_save=0 00:16:52.041 do_verify=1 00:16:52.041 verify=crc32c-intel 00:16:52.041 [job0] 00:16:52.041 filename=/dev/nvme0n1 00:16:52.041 Could not set queue depth (nvme0n1) 00:16:52.301 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.301 fio-3.35 00:16:52.301 Starting 1 thread 00:16:53.244 00:16:53.244 job0: (groupid=0, jobs=1): err= 0: pid=662140: Mon Jul 15 13:02:15 2024 00:16:53.244 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:53.244 slat (nsec): min=7194, max=58405, avg=25175.71, stdev=3074.61 
00:16:53.244 clat (usec): min=821, max=1227, avg=1066.20, stdev=76.53 00:16:53.244 lat (usec): min=847, max=1251, avg=1091.37, stdev=76.36 00:16:53.244 clat percentiles (usec): 00:16:53.244 | 1.00th=[ 848], 5.00th=[ 938], 10.00th=[ 963], 20.00th=[ 996], 00:16:53.244 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1106], 00:16:53.244 | 70.00th=[ 1123], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1172], 00:16:53.244 | 99.00th=[ 1221], 99.50th=[ 1221], 99.90th=[ 1221], 99.95th=[ 1221], 00:16:53.244 | 99.99th=[ 1221] 00:16:53.244 write: IOPS=791, BW=3165KiB/s (3241kB/s)(3168KiB/1001msec); 0 zone resets 00:16:53.244 slat (nsec): min=9147, max=65044, avg=26269.47, stdev=10146.19 00:16:53.244 clat (usec): min=227, max=3329, avg=518.97, stdev=148.76 00:16:53.244 lat (usec): min=242, max=3359, avg=545.24, stdev=151.62 00:16:53.244 clat percentiles (usec): 00:16:53.244 | 1.00th=[ 277], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 408], 00:16:53.244 | 30.00th=[ 478], 40.00th=[ 486], 50.00th=[ 498], 60.00th=[ 510], 00:16:53.244 | 70.00th=[ 537], 80.00th=[ 619], 90.00th=[ 685], 95.00th=[ 734], 00:16:53.244 | 99.00th=[ 799], 99.50th=[ 824], 99.90th=[ 3326], 99.95th=[ 3326], 00:16:53.244 | 99.99th=[ 3326] 00:16:53.244 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:53.244 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:53.244 lat (usec) : 250=0.08%, 500=32.06%, 750=26.61%, 1000=10.05% 00:16:53.244 lat (msec) : 2=31.13%, 4=0.08% 00:16:53.244 cpu : usr=1.70%, sys=3.60%, ctx=1304, majf=0, minf=1 00:16:53.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:53.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.244 issued rwts: total=512,792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:53.244 00:16:53.244 Run status group 0 (all jobs): 00:16:53.244 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:16:53.244 WRITE: bw=3165KiB/s (3241kB/s), 3165KiB/s-3165KiB/s (3241kB/s-3241kB/s), io=3168KiB (3244kB), run=1001-1001msec 00:16:53.244 00:16:53.244 Disk stats (read/write): 00:16:53.244 nvme0n1: ios=562/642, merge=0/0, ticks=570/309, in_queue=879, util=92.99% 00:16:53.244 13:02:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:53.505 13:02:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:53.505 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:53.506 rmmod nvme_tcp 00:16:53.506 rmmod nvme_fabrics 00:16:53.506 rmmod nvme_keyring 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 660727 ']' 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 660727 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 660727 ']' 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 660727 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 660727 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 660727' 00:16:53.506 killing process with pid 660727 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 660727 00:16:53.506 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 660727 00:16:53.766 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:53.766 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:53.766 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:53.766 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.766 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:53.766 13:02:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.766 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.766 13:02:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.313 13:02:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.313 00:16:56.313 real 0m18.338s 00:16:56.313 user 0m48.623s 00:16:56.313 sys 0m6.866s 00:16:56.313 13:02:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:56.313 13:02:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:56.313 ************************************ 00:16:56.313 END TEST nvmf_nmic 00:16:56.313 ************************************ 00:16:56.313 13:02:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:56.313 13:02:17 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:56.313 13:02:17 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:56.313 13:02:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.313 13:02:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:56.313 ************************************ 00:16:56.313 START TEST nvmf_fio_target 00:16:56.313 ************************************ 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:56.313 * Looking for test storage... 00:16:56.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.313 13:02:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.314 13:02:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:04.464 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.465 13:02:25 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:04.465 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:04.465 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.465 13:02:25 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:04.465 Found net devices under 0000:31:00.0: cvl_0_0 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:04.465 Found net devices under 0000:31:00.1: cvl_0_1 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:04.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:17:04.465 00:17:04.465 --- 10.0.0.2 ping statistics --- 00:17:04.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.465 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:04.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:17:04.465 00:17:04.465 --- 10.0.0.1 ping statistics --- 00:17:04.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.465 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=666977 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 666977 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 666977 ']' 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
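Summarized for reference: the nvmf_tcp_init trace above amounts to roughly the following shell sequence (interface names, addresses, and the port are copied from the log; this is a condensed sketch of what nvmf/common.sh does here, not its exact code):

    # put the first e810 port into a target-side namespace; the second port stays on the host as the initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator: cvl_0_1 gets 10.0.0.1; target: cvl_0_0 gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # host-side initiator driver needed for the later 'nvme connect'
    modprobe nvme-tcp

The nvmf_tgt application itself is then launched through 'ip netns exec cvl_0_0_ns_spdk' as shown above, so the target listens on 10.0.0.2 inside the namespace while fio connects from the host over cvl_0_1.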
00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.465 13:02:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.465 [2024-07-15 13:02:25.831856] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:17:04.465 [2024-07-15 13:02:25.831922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.465 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.465 [2024-07-15 13:02:25.911812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.465 [2024-07-15 13:02:25.986245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.465 [2024-07-15 13:02:25.986285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.465 [2024-07-15 13:02:25.986293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.465 [2024-07-15 13:02:25.986303] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.465 [2024-07-15 13:02:25.986308] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.465 [2024-07-15 13:02:25.986391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.465 [2024-07-15 13:02:25.986497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.465 [2024-07-15 13:02:25.986652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.465 [2024-07-15 13:02:25.986653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.038 13:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.038 13:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:05.038 13:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:05.038 13:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:05.038 13:02:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.038 13:02:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.038 13:02:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:05.038 [2024-07-15 13:02:26.796260] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.038 13:02:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:05.298 13:02:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:05.298 13:02:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:05.559 13:02:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:05.559 13:02:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:05.559 13:02:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
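Condensed for reference, the target/fio.sh setup that begins at target/fio.sh@19 above and continues below is the following rpc.py sequence ($rpc_py as assigned at target/fio.sh@14; arguments are copied from the trace with the order preserved and the repeated bdev_malloc_create calls collapsed into a comment; a sketch, not the script itself):

    # transport plus backing devices: Malloc0/Malloc1 exported directly,
    # Malloc2+Malloc3 as raid0, Malloc4..Malloc6 as concat0
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512        # run seven times in total, producing Malloc0..Malloc6
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    # one subsystem, four namespaces, one TCP listener on the namespaced target address
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    # initiator side: connect and wait until the namespaces show up as /dev/nvme0n1..n4
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The fio-wrapper runs that follow then exercise /dev/nvme0n1 through /dev/nvme0n4, i.e. the four namespaces added above.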
00:17:05.559 13:02:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:05.820 13:02:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:05.820 13:02:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:06.080 13:02:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:06.080 13:02:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:06.080 13:02:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:06.340 13:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:06.340 13:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:06.601 13:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:06.601 13:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:06.601 13:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:06.861 13:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:06.861 13:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.121 13:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:07.121 13:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:07.121 13:02:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.382 [2024-07-15 13:02:29.053819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.382 13:02:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:07.642 13:02:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:07.642 13:02:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:09.553 13:02:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:09.553 13:02:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:09.553 13:02:30 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:09.553 13:02:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:09.553 13:02:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:09.553 13:02:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:11.458 13:02:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:11.458 13:02:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:11.458 13:02:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:11.458 13:02:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:11.458 13:02:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:11.458 13:02:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:11.458 13:02:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:11.458 [global] 00:17:11.458 thread=1 00:17:11.458 invalidate=1 00:17:11.458 rw=write 00:17:11.458 time_based=1 00:17:11.458 runtime=1 00:17:11.458 ioengine=libaio 00:17:11.458 direct=1 00:17:11.458 bs=4096 00:17:11.458 iodepth=1 00:17:11.458 norandommap=0 00:17:11.458 numjobs=1 00:17:11.458 00:17:11.458 verify_dump=1 00:17:11.458 verify_backlog=512 00:17:11.458 verify_state_save=0 00:17:11.458 do_verify=1 00:17:11.458 verify=crc32c-intel 00:17:11.458 [job0] 00:17:11.458 filename=/dev/nvme0n1 00:17:11.458 [job1] 00:17:11.458 filename=/dev/nvme0n2 00:17:11.458 [job2] 00:17:11.458 filename=/dev/nvme0n3 00:17:11.458 [job3] 00:17:11.458 filename=/dev/nvme0n4 00:17:11.458 Could not set queue depth (nvme0n1) 00:17:11.458 Could not set queue depth (nvme0n2) 00:17:11.458 Could not set queue depth (nvme0n3) 00:17:11.458 Could not set queue depth (nvme0n4) 00:17:11.717 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:11.717 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:11.717 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:11.717 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:11.717 fio-3.35 00:17:11.717 Starting 4 threads 00:17:13.112 00:17:13.112 job0: (groupid=0, jobs=1): err= 0: pid=668815: Mon Jul 15 13:02:34 2024 00:17:13.112 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:13.112 slat (nsec): min=6907, max=43842, avg=26208.96, stdev=1394.81 00:17:13.112 clat (usec): min=650, max=1260, avg=926.45, stdev=112.23 00:17:13.112 lat (usec): min=677, max=1286, avg=952.66, stdev=112.19 00:17:13.112 clat percentiles (usec): 00:17:13.112 | 1.00th=[ 676], 5.00th=[ 734], 10.00th=[ 783], 20.00th=[ 824], 00:17:13.112 | 30.00th=[ 865], 40.00th=[ 906], 50.00th=[ 938], 60.00th=[ 963], 00:17:13.112 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1106], 00:17:13.112 | 99.00th=[ 1156], 99.50th=[ 1221], 99.90th=[ 1254], 99.95th=[ 1254], 00:17:13.112 | 99.99th=[ 1254] 00:17:13.112 write: IOPS=908, BW=3632KiB/s (3720kB/s)(3636KiB/1001msec); 0 zone resets 00:17:13.112 slat (nsec): min=8897, max=54287, avg=30330.68, stdev=9830.50 00:17:13.112 clat 
(usec): min=251, max=1660, avg=521.83, stdev=120.93 00:17:13.112 lat (usec): min=273, max=1670, avg=552.16, stdev=123.17 00:17:13.112 clat percentiles (usec): 00:17:13.112 | 1.00th=[ 297], 5.00th=[ 355], 10.00th=[ 392], 20.00th=[ 424], 00:17:13.112 | 30.00th=[ 461], 40.00th=[ 490], 50.00th=[ 510], 60.00th=[ 529], 00:17:13.112 | 70.00th=[ 553], 80.00th=[ 611], 90.00th=[ 676], 95.00th=[ 734], 00:17:13.112 | 99.00th=[ 848], 99.50th=[ 922], 99.90th=[ 1663], 99.95th=[ 1663], 00:17:13.112 | 99.99th=[ 1663] 00:17:13.112 bw ( KiB/s): min= 4096, max= 4096, per=36.93%, avg=4096.00, stdev= 0.00, samples=1 00:17:13.112 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:13.112 lat (usec) : 500=28.15%, 750=35.54%, 1000=26.32% 00:17:13.112 lat (msec) : 2=9.99% 00:17:13.112 cpu : usr=3.50%, sys=4.80%, ctx=1423, majf=0, minf=1 00:17:13.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:13.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.112 issued rwts: total=512,909,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:13.112 job1: (groupid=0, jobs=1): err= 0: pid=668833: Mon Jul 15 13:02:34 2024 00:17:13.112 read: IOPS=16, BW=65.4KiB/s (67.0kB/s)(68.0KiB/1040msec) 00:17:13.112 slat (nsec): min=25247, max=26417, avg=25532.00, stdev=265.75 00:17:13.112 clat (usec): min=959, max=43035, avg=39650.93, stdev=9975.54 00:17:13.112 lat (usec): min=985, max=43061, avg=39676.46, stdev=9975.49 00:17:13.112 clat percentiles (usec): 00:17:13.112 | 1.00th=[ 963], 5.00th=[ 963], 10.00th=[41681], 20.00th=[41681], 00:17:13.112 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:13.112 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:17:13.112 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:13.112 | 99.99th=[43254] 00:17:13.112 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:17:13.112 slat (nsec): min=8886, max=50257, avg=28930.48, stdev=9430.29 00:17:13.112 clat (usec): min=332, max=931, avg=676.95, stdev=112.25 00:17:13.112 lat (usec): min=365, max=963, avg=705.88, stdev=116.47 00:17:13.112 clat percentiles (usec): 00:17:13.112 | 1.00th=[ 404], 5.00th=[ 478], 10.00th=[ 519], 20.00th=[ 578], 00:17:13.112 | 30.00th=[ 619], 40.00th=[ 660], 50.00th=[ 693], 60.00th=[ 717], 00:17:13.112 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 816], 95.00th=[ 848], 00:17:13.112 | 99.00th=[ 889], 99.50th=[ 898], 99.90th=[ 930], 99.95th=[ 930], 00:17:13.112 | 99.99th=[ 930] 00:17:13.112 bw ( KiB/s): min= 4096, max= 4096, per=36.93%, avg=4096.00, stdev= 0.00, samples=1 00:17:13.112 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:13.112 lat (usec) : 500=7.37%, 750=61.81%, 1000=27.79% 00:17:13.112 lat (msec) : 50=3.02% 00:17:13.112 cpu : usr=0.87%, sys=1.92%, ctx=529, majf=0, minf=1 00:17:13.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:13.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.112 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:13.112 job2: (groupid=0, jobs=1): err= 0: pid=668852: Mon Jul 15 13:02:34 2024 00:17:13.112 read: IOPS=511, 
BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:13.112 slat (nsec): min=6741, max=59914, avg=24086.45, stdev=6157.53 00:17:13.112 clat (usec): min=417, max=1273, avg=824.45, stdev=144.90 00:17:13.112 lat (usec): min=443, max=1299, avg=848.53, stdev=145.96 00:17:13.112 clat percentiles (usec): 00:17:13.113 | 1.00th=[ 545], 5.00th=[ 635], 10.00th=[ 668], 20.00th=[ 701], 00:17:13.113 | 30.00th=[ 725], 40.00th=[ 766], 50.00th=[ 799], 60.00th=[ 840], 00:17:13.113 | 70.00th=[ 881], 80.00th=[ 963], 90.00th=[ 1045], 95.00th=[ 1090], 00:17:13.113 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1270], 99.95th=[ 1270], 00:17:13.113 | 99.99th=[ 1270] 00:17:13.113 write: IOPS=950, BW=3800KiB/s (3891kB/s)(3804KiB/1001msec); 0 zone resets 00:17:13.113 slat (nsec): min=9755, max=71448, avg=31846.43, stdev=8116.18 00:17:13.113 clat (usec): min=151, max=893, avg=551.71, stdev=123.18 00:17:13.113 lat (usec): min=163, max=927, avg=583.56, stdev=125.28 00:17:13.113 clat percentiles (usec): 00:17:13.113 | 1.00th=[ 277], 5.00th=[ 338], 10.00th=[ 400], 20.00th=[ 445], 00:17:13.113 | 30.00th=[ 486], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 578], 00:17:13.113 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 717], 95.00th=[ 750], 00:17:13.113 | 99.00th=[ 816], 99.50th=[ 848], 99.90th=[ 898], 99.95th=[ 898], 00:17:13.113 | 99.99th=[ 898] 00:17:13.113 bw ( KiB/s): min= 4096, max= 4096, per=36.93%, avg=4096.00, stdev= 0.00, samples=1 00:17:13.113 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:13.113 lat (usec) : 250=0.48%, 500=20.98%, 750=52.49%, 1000=20.85% 00:17:13.113 lat (msec) : 2=5.19% 00:17:13.113 cpu : usr=2.70%, sys=3.80%, ctx=1464, majf=0, minf=1 00:17:13.113 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:13.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.113 issued rwts: total=512,951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.113 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:13.113 job3: (groupid=0, jobs=1): err= 0: pid=668859: Mon Jul 15 13:02:34 2024 00:17:13.113 read: IOPS=15, BW=61.9KiB/s (63.4kB/s)(64.0KiB/1034msec) 00:17:13.113 slat (nsec): min=9751, max=26576, avg=24435.62, stdev=3932.31 00:17:13.113 clat (usec): min=41064, max=43008, avg=41970.08, stdev=360.81 00:17:13.113 lat (usec): min=41074, max=43033, avg=41994.52, stdev=363.41 00:17:13.113 clat percentiles (usec): 00:17:13.113 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:13.113 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:13.113 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:17:13.113 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:13.113 | 99.99th=[43254] 00:17:13.113 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:17:13.113 slat (nsec): min=9783, max=54935, avg=28764.80, stdev=9970.66 00:17:13.113 clat (usec): min=331, max=1245, avg=670.50, stdev=125.57 00:17:13.113 lat (usec): min=344, max=1278, avg=699.27, stdev=129.94 00:17:13.113 clat percentiles (usec): 00:17:13.113 | 1.00th=[ 375], 5.00th=[ 449], 10.00th=[ 494], 20.00th=[ 562], 00:17:13.113 | 30.00th=[ 611], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[ 709], 00:17:13.113 | 70.00th=[ 750], 80.00th=[ 775], 90.00th=[ 807], 95.00th=[ 857], 00:17:13.113 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1254], 99.95th=[ 1254], 00:17:13.113 | 99.99th=[ 1254] 00:17:13.113 
bw ( KiB/s): min= 4096, max= 4096, per=36.93%, avg=4096.00, stdev= 0.00, samples=1 00:17:13.113 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:13.113 lat (usec) : 500=10.42%, 750=59.28%, 1000=27.08% 00:17:13.113 lat (msec) : 2=0.19%, 50=3.03% 00:17:13.113 cpu : usr=0.87%, sys=1.26%, ctx=529, majf=0, minf=1 00:17:13.113 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:13.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.113 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.113 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:13.113 00:17:13.113 Run status group 0 (all jobs): 00:17:13.113 READ: bw=4065KiB/s (4163kB/s), 61.9KiB/s-2046KiB/s (63.4kB/s-2095kB/s), io=4228KiB (4329kB), run=1001-1040msec 00:17:13.113 WRITE: bw=10.8MiB/s (11.4MB/s), 1969KiB/s-3800KiB/s (2016kB/s-3891kB/s), io=11.3MiB (11.8MB), run=1001-1040msec 00:17:13.113 00:17:13.113 Disk stats (read/write): 00:17:13.113 nvme0n1: ios=534/649, merge=0/0, ticks=1289/270, in_queue=1559, util=84.27% 00:17:13.113 nvme0n2: ios=62/512, merge=0/0, ticks=563/285, in_queue=848, util=91.12% 00:17:13.113 nvme0n3: ios=534/672, merge=0/0, ticks=1301/342, in_queue=1643, util=92.39% 00:17:13.113 nvme0n4: ios=72/512, merge=0/0, ticks=729/326, in_queue=1055, util=94.87% 00:17:13.113 13:02:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:13.113 [global] 00:17:13.113 thread=1 00:17:13.113 invalidate=1 00:17:13.113 rw=randwrite 00:17:13.113 time_based=1 00:17:13.113 runtime=1 00:17:13.113 ioengine=libaio 00:17:13.113 direct=1 00:17:13.113 bs=4096 00:17:13.113 iodepth=1 00:17:13.113 norandommap=0 00:17:13.113 numjobs=1 00:17:13.113 00:17:13.113 verify_dump=1 00:17:13.113 verify_backlog=512 00:17:13.113 verify_state_save=0 00:17:13.113 do_verify=1 00:17:13.113 verify=crc32c-intel 00:17:13.113 [job0] 00:17:13.113 filename=/dev/nvme0n1 00:17:13.113 [job1] 00:17:13.113 filename=/dev/nvme0n2 00:17:13.113 [job2] 00:17:13.113 filename=/dev/nvme0n3 00:17:13.113 [job3] 00:17:13.113 filename=/dev/nvme0n4 00:17:13.113 Could not set queue depth (nvme0n1) 00:17:13.113 Could not set queue depth (nvme0n2) 00:17:13.113 Could not set queue depth (nvme0n3) 00:17:13.113 Could not set queue depth (nvme0n4) 00:17:13.373 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.373 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.373 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.373 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:13.373 fio-3.35 00:17:13.373 Starting 4 threads 00:17:14.763 00:17:14.763 job0: (groupid=0, jobs=1): err= 0: pid=669297: Mon Jul 15 13:02:36 2024 00:17:14.763 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:14.763 slat (nsec): min=6620, max=54989, avg=24109.49, stdev=4320.52 00:17:14.763 clat (usec): min=441, max=1459, avg=1032.91, stdev=109.78 00:17:14.763 lat (usec): min=453, max=1483, avg=1057.02, stdev=110.37 00:17:14.763 clat percentiles (usec): 00:17:14.763 | 1.00th=[ 701], 5.00th=[ 832], 10.00th=[ 889], 20.00th=[ 963], 00:17:14.763 | 
30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1045], 60.00th=[ 1074], 00:17:14.763 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1172], 00:17:14.763 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1467], 99.95th=[ 1467], 00:17:14.763 | 99.99th=[ 1467] 00:17:14.763 write: IOPS=713, BW=2853KiB/s (2922kB/s)(2856KiB/1001msec); 0 zone resets 00:17:14.763 slat (nsec): min=8989, max=65644, avg=26885.96, stdev=8637.41 00:17:14.763 clat (usec): min=297, max=918, avg=599.85, stdev=118.35 00:17:14.763 lat (usec): min=316, max=951, avg=626.73, stdev=120.81 00:17:14.763 clat percentiles (usec): 00:17:14.763 | 1.00th=[ 338], 5.00th=[ 404], 10.00th=[ 433], 20.00th=[ 510], 00:17:14.763 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:17:14.763 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 791], 00:17:14.763 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 922], 99.95th=[ 922], 00:17:14.763 | 99.99th=[ 922] 00:17:14.763 bw ( KiB/s): min= 4096, max= 4096, per=41.25%, avg=4096.00, stdev= 0.00, samples=1 00:17:14.763 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.763 lat (usec) : 500=10.85%, 750=42.33%, 1000=17.70% 00:17:14.763 lat (msec) : 2=29.12% 00:17:14.763 cpu : usr=1.70%, sys=3.40%, ctx=1230, majf=0, minf=1 00:17:14.763 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.763 issued rwts: total=512,714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.763 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.763 job1: (groupid=0, jobs=1): err= 0: pid=669311: Mon Jul 15 13:02:36 2024 00:17:14.763 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:14.763 slat (nsec): min=7155, max=60192, avg=25507.00, stdev=3488.85 00:17:14.763 clat (usec): min=482, max=1328, avg=1044.40, stdev=101.90 00:17:14.763 lat (usec): min=507, max=1388, avg=1069.91, stdev=102.20 00:17:14.763 clat percentiles (usec): 00:17:14.763 | 1.00th=[ 734], 5.00th=[ 865], 10.00th=[ 938], 20.00th=[ 971], 00:17:14.763 | 30.00th=[ 1012], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1074], 00:17:14.763 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1205], 00:17:14.763 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1336], 99.95th=[ 1336], 00:17:14.763 | 99.99th=[ 1336] 00:17:14.763 write: IOPS=628, BW=2513KiB/s (2574kB/s)(2516KiB/1001msec); 0 zone resets 00:17:14.763 slat (nsec): min=9245, max=53059, avg=28939.34, stdev=8614.80 00:17:14.763 clat (usec): min=288, max=1005, avg=674.17, stdev=123.32 00:17:14.763 lat (usec): min=306, max=1051, avg=703.11, stdev=126.30 00:17:14.763 clat percentiles (usec): 00:17:14.763 | 1.00th=[ 400], 5.00th=[ 465], 10.00th=[ 519], 20.00th=[ 578], 00:17:14.763 | 30.00th=[ 611], 40.00th=[ 644], 50.00th=[ 676], 60.00th=[ 701], 00:17:14.763 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 832], 95.00th=[ 889], 00:17:14.763 | 99.00th=[ 979], 99.50th=[ 996], 99.90th=[ 1004], 99.95th=[ 1004], 00:17:14.763 | 99.99th=[ 1004] 00:17:14.763 bw ( KiB/s): min= 928, max= 4096, per=25.30%, avg=2512.00, stdev=2240.11, samples=2 00:17:14.763 iops : min= 232, max= 1024, avg=628.00, stdev=560.03, samples=2 00:17:14.763 lat (usec) : 500=4.38%, 750=36.90%, 1000=25.94% 00:17:14.763 lat (msec) : 2=32.78% 00:17:14.763 cpu : usr=1.30%, sys=3.60%, ctx=1142, majf=0, minf=1 00:17:14.763 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:17:14.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.764 issued rwts: total=512,629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.764 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.764 job2: (groupid=0, jobs=1): err= 0: pid=669329: Mon Jul 15 13:02:36 2024 00:17:14.764 read: IOPS=16, BW=65.9KiB/s (67.5kB/s)(68.0KiB/1032msec) 00:17:14.764 slat (nsec): min=24528, max=25570, avg=24839.18, stdev=240.38 00:17:14.764 clat (usec): min=1265, max=43027, avg=39812.83, stdev=9943.08 00:17:14.764 lat (usec): min=1290, max=43051, avg=39837.67, stdev=9943.08 00:17:14.764 clat percentiles (usec): 00:17:14.764 | 1.00th=[ 1270], 5.00th=[ 1270], 10.00th=[41681], 20.00th=[41681], 00:17:14.764 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:14.764 | 70.00th=[42206], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:17:14.764 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:14.764 | 99.99th=[43254] 00:17:14.764 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:17:14.764 slat (nsec): min=9494, max=50275, avg=27427.77, stdev=9253.40 00:17:14.764 clat (usec): min=308, max=929, avg=654.00, stdev=113.59 00:17:14.764 lat (usec): min=321, max=962, avg=681.43, stdev=117.64 00:17:14.764 clat percentiles (usec): 00:17:14.764 | 1.00th=[ 392], 5.00th=[ 429], 10.00th=[ 498], 20.00th=[ 553], 00:17:14.764 | 30.00th=[ 611], 40.00th=[ 635], 50.00th=[ 660], 60.00th=[ 693], 00:17:14.764 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 824], 00:17:14.764 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 930], 99.95th=[ 930], 00:17:14.764 | 99.99th=[ 930] 00:17:14.764 bw ( KiB/s): min= 4096, max= 4096, per=41.25%, avg=4096.00, stdev= 0.00, samples=1 00:17:14.764 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.764 lat (usec) : 500=10.02%, 750=66.73%, 1000=20.04% 00:17:14.764 lat (msec) : 2=0.19%, 50=3.02% 00:17:14.764 cpu : usr=0.68%, sys=1.36%, ctx=530, majf=0, minf=1 00:17:14.764 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.764 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.764 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.764 job3: (groupid=0, jobs=1): err= 0: pid=669335: Mon Jul 15 13:02:36 2024 00:17:14.764 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:14.764 slat (nsec): min=26168, max=44486, avg=27082.43, stdev=2413.51 00:17:14.764 clat (usec): min=664, max=1247, avg=998.92, stdev=99.50 00:17:14.764 lat (usec): min=692, max=1278, avg=1026.00, stdev=99.65 00:17:14.764 clat percentiles (usec): 00:17:14.764 | 1.00th=[ 742], 5.00th=[ 824], 10.00th=[ 865], 20.00th=[ 922], 00:17:14.764 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 1004], 60.00th=[ 1037], 00:17:14.764 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:17:14.764 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1254], 99.95th=[ 1254], 00:17:14.764 | 99.99th=[ 1254] 00:17:14.764 write: IOPS=706, BW=2825KiB/s (2893kB/s)(2828KiB/1001msec); 0 zone resets 00:17:14.764 slat (nsec): min=8894, max=67122, avg=30582.60, stdev=9345.63 00:17:14.764 clat (usec): min=247, max=901, avg=626.21, stdev=135.39 00:17:14.764 lat (usec): 
min=258, max=933, avg=656.79, stdev=139.51 00:17:14.764 clat percentiles (usec): 00:17:14.764 | 1.00th=[ 297], 5.00th=[ 363], 10.00th=[ 429], 20.00th=[ 515], 00:17:14.764 | 30.00th=[ 562], 40.00th=[ 611], 50.00th=[ 652], 60.00th=[ 676], 00:17:14.764 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 791], 95.00th=[ 816], 00:17:14.764 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 906], 99.95th=[ 906], 00:17:14.764 | 99.99th=[ 906] 00:17:14.764 bw ( KiB/s): min= 4096, max= 4096, per=41.25%, avg=4096.00, stdev= 0.00, samples=1 00:17:14.764 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:14.764 lat (usec) : 250=0.08%, 500=10.34%, 750=37.90%, 1000=29.94% 00:17:14.764 lat (msec) : 2=21.74% 00:17:14.764 cpu : usr=2.40%, sys=4.90%, ctx=1224, majf=0, minf=1 00:17:14.764 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.764 issued rwts: total=512,707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.764 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.764 00:17:14.764 Run status group 0 (all jobs): 00:17:14.764 READ: bw=6019KiB/s (6164kB/s), 65.9KiB/s-2046KiB/s (67.5kB/s-2095kB/s), io=6212KiB (6361kB), run=1001-1032msec 00:17:14.764 WRITE: bw=9930KiB/s (10.2MB/s), 1984KiB/s-2853KiB/s (2032kB/s-2922kB/s), io=10.0MiB (10.5MB), run=1001-1032msec 00:17:14.764 00:17:14.764 Disk stats (read/write): 00:17:14.764 nvme0n1: ios=529/512, merge=0/0, ticks=536/292, in_queue=828, util=87.17% 00:17:14.764 nvme0n2: ios=480/512, merge=0/0, ticks=563/327, in_queue=890, util=91.24% 00:17:14.764 nvme0n3: ios=34/512, merge=0/0, ticks=1358/328, in_queue=1686, util=92.62% 00:17:14.764 nvme0n4: ios=534/512, merge=0/0, ticks=565/257, in_queue=822, util=96.91% 00:17:14.764 13:02:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:14.764 [global] 00:17:14.764 thread=1 00:17:14.764 invalidate=1 00:17:14.764 rw=write 00:17:14.764 time_based=1 00:17:14.764 runtime=1 00:17:14.764 ioengine=libaio 00:17:14.764 direct=1 00:17:14.764 bs=4096 00:17:14.764 iodepth=128 00:17:14.764 norandommap=0 00:17:14.764 numjobs=1 00:17:14.764 00:17:14.764 verify_dump=1 00:17:14.764 verify_backlog=512 00:17:14.764 verify_state_save=0 00:17:14.764 do_verify=1 00:17:14.764 verify=crc32c-intel 00:17:14.764 [job0] 00:17:14.764 filename=/dev/nvme0n1 00:17:14.764 [job1] 00:17:14.764 filename=/dev/nvme0n2 00:17:14.764 [job2] 00:17:14.764 filename=/dev/nvme0n3 00:17:14.764 [job3] 00:17:14.764 filename=/dev/nvme0n4 00:17:14.764 Could not set queue depth (nvme0n1) 00:17:14.764 Could not set queue depth (nvme0n2) 00:17:14.764 Could not set queue depth (nvme0n3) 00:17:14.764 Could not set queue depth (nvme0n4) 00:17:15.071 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:15.071 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:15.071 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:15.071 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:15.071 fio-3.35 00:17:15.071 Starting 4 threads 00:17:16.032 00:17:16.032 job0: (groupid=0, jobs=1): err= 0: pid=669800: Mon Jul 15 13:02:37 
2024 00:17:16.032 read: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec) 00:17:16.032 slat (nsec): min=852, max=11618k, avg=69841.80, stdev=450714.02 00:17:16.032 clat (usec): min=3964, max=35116, avg=9275.95, stdev=3175.14 00:17:16.032 lat (usec): min=3972, max=35121, avg=9345.79, stdev=3204.55 00:17:16.032 clat percentiles (usec): 00:17:16.032 | 1.00th=[ 5080], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7504], 00:17:16.032 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8356], 60.00th=[ 8717], 00:17:16.032 | 70.00th=[ 9241], 80.00th=[10290], 90.00th=[12649], 95.00th=[15139], 00:17:16.032 | 99.00th=[21103], 99.50th=[21365], 99.90th=[33424], 99.95th=[33424], 00:17:16.032 | 99.99th=[34866] 00:17:16.032 write: IOPS=7334, BW=28.6MiB/s (30.0MB/s)(28.7MiB/1002msec); 0 zone resets 00:17:16.032 slat (nsec): min=1492, max=6383.1k, avg=56565.96, stdev=350004.87 00:17:16.032 clat (usec): min=626, max=61235, avg=8264.77, stdev=4837.16 00:17:16.032 lat (usec): min=634, max=61243, avg=8321.33, stdev=4843.35 00:17:16.032 clat percentiles (usec): 00:17:16.032 | 1.00th=[ 1270], 5.00th=[ 2507], 10.00th=[ 5538], 20.00th=[ 6456], 00:17:16.032 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 7963], 00:17:16.032 | 70.00th=[ 8160], 80.00th=[ 8848], 90.00th=[11207], 95.00th=[13304], 00:17:16.032 | 99.00th=[34341], 99.50th=[45876], 99.90th=[55837], 99.95th=[61080], 00:17:16.032 | 99.99th=[61080] 00:17:16.032 bw ( KiB/s): min=26960, max=30816, per=31.01%, avg=28888.00, stdev=2726.60, samples=2 00:17:16.032 iops : min= 6740, max= 7704, avg=7222.00, stdev=681.65, samples=2 00:17:16.032 lat (usec) : 750=0.01%, 1000=0.05% 00:17:16.032 lat (msec) : 2=1.41%, 4=1.81%, 10=78.94%, 20=16.00%, 50=1.63% 00:17:16.032 lat (msec) : 100=0.15% 00:17:16.032 cpu : usr=4.30%, sys=5.19%, ctx=613, majf=0, minf=1 00:17:16.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:16.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:16.033 issued rwts: total=7168,7349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:16.033 job1: (groupid=0, jobs=1): err= 0: pid=669815: Mon Jul 15 13:02:37 2024 00:17:16.033 read: IOPS=3225, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1002msec) 00:17:16.033 slat (nsec): min=919, max=16531k, avg=169780.98, stdev=950210.81 00:17:16.033 clat (usec): min=1283, max=54347, avg=20673.70, stdev=9770.33 00:17:16.033 lat (usec): min=6152, max=54354, avg=20843.48, stdev=9804.24 00:17:16.033 clat percentiles (usec): 00:17:16.033 | 1.00th=[ 6587], 5.00th=[10814], 10.00th=[11469], 20.00th=[12125], 00:17:16.033 | 30.00th=[13960], 40.00th=[15139], 50.00th=[17433], 60.00th=[21103], 00:17:16.033 | 70.00th=[23725], 80.00th=[29754], 90.00th=[34341], 95.00th=[40633], 00:17:16.033 | 99.00th=[49546], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 00:17:16.033 | 99.99th=[54264] 00:17:16.033 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:17:16.033 slat (nsec): min=1554, max=10975k, avg=121061.61, stdev=532034.60 00:17:16.033 clat (usec): min=7774, max=38212, avg=16568.01, stdev=5855.90 00:17:16.033 lat (usec): min=7864, max=38234, avg=16689.07, stdev=5864.41 00:17:16.033 clat percentiles (usec): 00:17:16.033 | 1.00th=[ 8717], 5.00th=[10290], 10.00th=[10552], 20.00th=[11600], 00:17:16.033 | 30.00th=[13435], 40.00th=[14484], 50.00th=[15139], 60.00th=[15926], 00:17:16.033 | 70.00th=[17433], 
80.00th=[20841], 90.00th=[23725], 95.00th=[30540], 00:17:16.033 | 99.00th=[34341], 99.50th=[36439], 99.90th=[38011], 99.95th=[38011], 00:17:16.033 | 99.99th=[38011] 00:17:16.033 bw ( KiB/s): min=13376, max=15296, per=15.39%, avg=14336.00, stdev=1357.65, samples=2 00:17:16.033 iops : min= 3344, max= 3824, avg=3584.00, stdev=339.41, samples=2 00:17:16.033 lat (msec) : 2=0.01%, 10=2.89%, 20=63.54%, 50=33.11%, 100=0.44% 00:17:16.033 cpu : usr=1.60%, sys=4.10%, ctx=455, majf=0, minf=1 00:17:16.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:16.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:16.033 issued rwts: total=3232,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:16.033 job2: (groupid=0, jobs=1): err= 0: pid=669832: Mon Jul 15 13:02:37 2024 00:17:16.033 read: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec) 00:17:16.033 slat (nsec): min=961, max=9387.1k, avg=74423.22, stdev=551650.10 00:17:16.033 clat (usec): min=3365, max=19683, avg=9814.56, stdev=2560.42 00:17:16.033 lat (usec): min=3372, max=19715, avg=9888.99, stdev=2594.93 00:17:16.033 clat percentiles (usec): 00:17:16.033 | 1.00th=[ 4883], 5.00th=[ 6652], 10.00th=[ 7111], 20.00th=[ 7701], 00:17:16.033 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10159], 00:17:16.033 | 70.00th=[10421], 80.00th=[11600], 90.00th=[13304], 95.00th=[14746], 00:17:16.033 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19006], 99.95th=[19268], 00:17:16.033 | 99.99th=[19792] 00:17:16.033 write: IOPS=6872, BW=26.8MiB/s (28.1MB/s)(27.1MiB/1009msec); 0 zone resets 00:17:16.033 slat (nsec): min=1679, max=8368.4k, avg=65559.95, stdev=363256.56 00:17:16.033 clat (usec): min=692, max=19178, avg=9006.33, stdev=3367.61 00:17:16.033 lat (usec): min=905, max=19180, avg=9071.89, stdev=3383.34 00:17:16.033 clat percentiles (usec): 00:17:16.033 | 1.00th=[ 2540], 5.00th=[ 4113], 10.00th=[ 4752], 20.00th=[ 6194], 00:17:16.033 | 30.00th=[ 7111], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[ 9765], 00:17:16.033 | 70.00th=[10421], 80.00th=[10814], 90.00th=[14615], 95.00th=[15795], 00:17:16.033 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18744], 99.95th=[19006], 00:17:16.033 | 99.99th=[19268] 00:17:16.033 bw ( KiB/s): min=24072, max=30384, per=29.23%, avg=27228.00, stdev=4463.26, samples=2 00:17:16.033 iops : min= 6018, max= 7596, avg=6807.00, stdev=1115.81, samples=2 00:17:16.033 lat (usec) : 750=0.01%, 1000=0.01% 00:17:16.033 lat (msec) : 2=0.31%, 4=1.98%, 10=57.93%, 20=39.76% 00:17:16.033 cpu : usr=4.76%, sys=7.44%, ctx=688, majf=0, minf=1 00:17:16.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:16.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:16.033 issued rwts: total=6656,6934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:16.033 job3: (groupid=0, jobs=1): err= 0: pid=669839: Mon Jul 15 13:02:37 2024 00:17:16.033 read: IOPS=5361, BW=20.9MiB/s (22.0MB/s)(21.0MiB/1002msec) 00:17:16.033 slat (nsec): min=913, max=9009.0k, avg=85746.25, stdev=508516.34 00:17:16.033 clat (usec): min=1276, max=27243, avg=10649.47, stdev=3050.30 00:17:16.033 lat (usec): min=2260, max=27248, avg=10735.22, stdev=3091.36 00:17:16.033 clat 
percentiles (usec): 00:17:16.033 | 1.00th=[ 5997], 5.00th=[ 7046], 10.00th=[ 7701], 20.00th=[ 8356], 00:17:16.033 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[10683], 00:17:16.033 | 70.00th=[11469], 80.00th=[12125], 90.00th=[14615], 95.00th=[15664], 00:17:16.033 | 99.00th=[23987], 99.50th=[23987], 99.90th=[24249], 99.95th=[24249], 00:17:16.033 | 99.99th=[27132] 00:17:16.033 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:17:16.033 slat (nsec): min=1632, max=34752k, avg=89316.70, stdev=625769.25 00:17:16.033 clat (usec): min=685, max=37515, avg=11617.19, stdev=3740.94 00:17:16.033 lat (usec): min=762, max=41964, avg=11706.50, stdev=3784.62 00:17:16.033 clat percentiles (usec): 00:17:16.033 | 1.00th=[ 4424], 5.00th=[ 5669], 10.00th=[ 7308], 20.00th=[ 8455], 00:17:16.033 | 30.00th=[ 8979], 40.00th=[10159], 50.00th=[12125], 60.00th=[13042], 00:17:16.033 | 70.00th=[13829], 80.00th=[14746], 90.00th=[15533], 95.00th=[16909], 00:17:16.033 | 99.00th=[22414], 99.50th=[24511], 99.90th=[37487], 99.95th=[37487], 00:17:16.033 | 99.99th=[37487] 00:17:16.033 bw ( KiB/s): min=20480, max=24576, per=24.18%, avg=22528.00, stdev=2896.31, samples=2 00:17:16.033 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:17:16.033 lat (usec) : 750=0.02%, 1000=0.03% 00:17:16.033 lat (msec) : 2=0.05%, 4=0.07%, 10=42.28%, 20=55.64%, 50=1.90% 00:17:16.033 cpu : usr=4.00%, sys=4.50%, ctx=607, majf=0, minf=1 00:17:16.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:16.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:16.033 issued rwts: total=5372,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:16.033 00:17:16.033 Run status group 0 (all jobs): 00:17:16.033 READ: bw=86.8MiB/s (91.0MB/s), 12.6MiB/s-27.9MiB/s (13.2MB/s-29.3MB/s), io=87.6MiB (91.9MB), run=1002-1009msec 00:17:16.033 WRITE: bw=91.0MiB/s (95.4MB/s), 14.0MiB/s-28.6MiB/s (14.7MB/s-30.0MB/s), io=91.8MiB (96.3MB), run=1002-1009msec 00:17:16.033 00:17:16.033 Disk stats (read/write): 00:17:16.033 nvme0n1: ios=5839/6144, merge=0/0, ticks=27240/25484, in_queue=52724, util=94.09% 00:17:16.033 nvme0n2: ios=2595/3071, merge=0/0, ticks=13953/12044, in_queue=25997, util=87.77% 00:17:16.033 nvme0n3: ios=5690/5807, merge=0/0, ticks=51915/49828, in_queue=101743, util=100.00% 00:17:16.033 nvme0n4: ios=4643/4799, merge=0/0, ticks=24910/26436, in_queue=51346, util=98.83% 00:17:16.033 13:02:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:16.295 [global] 00:17:16.295 thread=1 00:17:16.295 invalidate=1 00:17:16.295 rw=randwrite 00:17:16.295 time_based=1 00:17:16.295 runtime=1 00:17:16.295 ioengine=libaio 00:17:16.295 direct=1 00:17:16.295 bs=4096 00:17:16.295 iodepth=128 00:17:16.295 norandommap=0 00:17:16.295 numjobs=1 00:17:16.295 00:17:16.295 verify_dump=1 00:17:16.295 verify_backlog=512 00:17:16.295 verify_state_save=0 00:17:16.295 do_verify=1 00:17:16.295 verify=crc32c-intel 00:17:16.295 [job0] 00:17:16.295 filename=/dev/nvme0n1 00:17:16.295 [job1] 00:17:16.295 filename=/dev/nvme0n2 00:17:16.295 [job2] 00:17:16.295 filename=/dev/nvme0n3 00:17:16.295 [job3] 00:17:16.295 filename=/dev/nvme0n4 00:17:16.295 Could not set queue depth (nvme0n1) 00:17:16.295 Could not set queue depth 
(nvme0n2) 00:17:16.295 Could not set queue depth (nvme0n3) 00:17:16.295 Could not set queue depth (nvme0n4) 00:17:16.556 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.556 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.556 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.556 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:16.556 fio-3.35 00:17:16.556 Starting 4 threads 00:17:17.943 00:17:17.943 job0: (groupid=0, jobs=1): err= 0: pid=670299: Mon Jul 15 13:02:39 2024 00:17:17.943 read: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec) 00:17:17.943 slat (nsec): min=962, max=10515k, avg=86712.26, stdev=614581.52 00:17:17.943 clat (usec): min=4039, max=26572, avg=10437.83, stdev=3755.15 00:17:17.943 lat (usec): min=4047, max=26600, avg=10524.54, stdev=3813.34 00:17:17.943 clat percentiles (usec): 00:17:17.943 | 1.00th=[ 5342], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 6718], 00:17:17.943 | 30.00th=[ 6980], 40.00th=[ 8455], 50.00th=[ 9896], 60.00th=[11600], 00:17:17.943 | 70.00th=[12518], 80.00th=[13960], 90.00th=[15926], 95.00th=[17433], 00:17:17.943 | 99.00th=[20579], 99.50th=[21890], 99.90th=[24511], 99.95th=[24511], 00:17:17.943 | 99.99th=[26608] 00:17:17.943 write: IOPS=4782, BW=18.7MiB/s (19.6MB/s)(18.9MiB/1012msec); 0 zone resets 00:17:17.943 slat (nsec): min=1619, max=12710k, avg=117977.00, stdev=570378.19 00:17:17.943 clat (usec): min=2142, max=60513, avg=16578.96, stdev=11629.78 00:17:17.943 lat (usec): min=2150, max=60524, avg=16696.94, stdev=11702.86 00:17:17.943 clat percentiles (usec): 00:17:17.943 | 1.00th=[ 4015], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[10552], 00:17:17.943 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:17:17.943 | 70.00th=[15926], 80.00th=[21103], 90.00th=[30802], 95.00th=[47973], 00:17:17.943 | 99.00th=[58459], 99.50th=[58983], 99.90th=[60556], 99.95th=[60556], 00:17:17.943 | 99.99th=[60556] 00:17:17.943 bw ( KiB/s): min=18416, max=19288, per=21.38%, avg=18852.00, stdev=616.60, samples=2 00:17:17.943 iops : min= 4604, max= 4822, avg=4713.00, stdev=154.15, samples=2 00:17:17.943 lat (msec) : 4=0.51%, 10=34.19%, 20=53.33%, 50=9.70%, 100=2.28% 00:17:17.943 cpu : usr=4.95%, sys=4.95%, ctx=565, majf=0, minf=1 00:17:17.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:17.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.944 issued rwts: total=4608,4840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.944 job1: (groupid=0, jobs=1): err= 0: pid=670313: Mon Jul 15 13:02:39 2024 00:17:17.944 read: IOPS=5565, BW=21.7MiB/s (22.8MB/s)(22.0MiB/1012msec) 00:17:17.944 slat (nsec): min=873, max=14834k, avg=84237.17, stdev=656537.76 00:17:17.944 clat (usec): min=3399, max=31152, avg=10991.55, stdev=4243.15 00:17:17.944 lat (usec): min=3402, max=31177, avg=11075.79, stdev=4299.88 00:17:17.944 clat percentiles (usec): 00:17:17.944 | 1.00th=[ 4948], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7046], 00:17:17.944 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[10552], 60.00th=[11469], 00:17:17.944 | 70.00th=[13042], 80.00th=[14615], 90.00th=[16909], 
95.00th=[17695], 00:17:17.944 | 99.00th=[23462], 99.50th=[25035], 99.90th=[29492], 99.95th=[29492], 00:17:17.944 | 99.99th=[31065] 00:17:17.944 write: IOPS=5998, BW=23.4MiB/s (24.6MB/s)(23.7MiB/1012msec); 0 zone resets 00:17:17.944 slat (nsec): min=1461, max=11285k, avg=81433.57, stdev=511803.19 00:17:17.944 clat (usec): min=1268, max=63587, avg=10918.66, stdev=8649.15 00:17:17.944 lat (usec): min=1276, max=65209, avg=11000.09, stdev=8700.54 00:17:17.944 clat percentiles (usec): 00:17:17.944 | 1.00th=[ 2900], 5.00th=[ 3982], 10.00th=[ 4359], 20.00th=[ 5538], 00:17:17.944 | 30.00th=[ 6128], 40.00th=[ 6849], 50.00th=[ 8848], 60.00th=[10421], 00:17:17.944 | 70.00th=[12125], 80.00th=[13698], 90.00th=[19006], 95.00th=[22676], 00:17:17.944 | 99.00th=[55837], 99.50th=[57410], 99.90th=[63701], 99.95th=[63701], 00:17:17.944 | 99.99th=[63701] 00:17:17.944 bw ( KiB/s): min=22720, max=24816, per=26.95%, avg=23768.00, stdev=1482.10, samples=2 00:17:17.944 iops : min= 5680, max= 6204, avg=5942.00, stdev=370.52, samples=2 00:17:17.944 lat (msec) : 2=0.04%, 4=2.97%, 10=49.84%, 20=40.75%, 50=5.44% 00:17:17.944 lat (msec) : 100=0.95% 00:17:17.944 cpu : usr=4.06%, sys=5.84%, ctx=444, majf=0, minf=1 00:17:17.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:17.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.944 issued rwts: total=5632,6070,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.944 job2: (groupid=0, jobs=1): err= 0: pid=670331: Mon Jul 15 13:02:39 2024 00:17:17.944 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:17:17.944 slat (nsec): min=950, max=18328k, avg=76292.15, stdev=631727.38 00:17:17.944 clat (usec): min=3495, max=60220, avg=11381.23, stdev=6143.19 00:17:17.944 lat (usec): min=3499, max=63290, avg=11457.52, stdev=6200.05 00:17:17.944 clat percentiles (usec): 00:17:17.944 | 1.00th=[ 4555], 5.00th=[ 6063], 10.00th=[ 6915], 20.00th=[ 7308], 00:17:17.944 | 30.00th=[ 7701], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[10552], 00:17:17.944 | 70.00th=[11994], 80.00th=[13173], 90.00th=[19792], 95.00th=[24249], 00:17:17.944 | 99.00th=[35390], 99.50th=[38011], 99.90th=[56361], 99.95th=[56361], 00:17:17.944 | 99.99th=[60031] 00:17:17.944 write: IOPS=5832, BW=22.8MiB/s (23.9MB/s)(23.0MiB/1011msec); 0 zone resets 00:17:17.944 slat (nsec): min=1595, max=10807k, avg=72427.28, stdev=484371.97 00:17:17.944 clat (usec): min=1236, max=43620, avg=10857.96, stdev=7931.67 00:17:17.944 lat (usec): min=1256, max=43627, avg=10930.39, stdev=7979.28 00:17:17.944 clat percentiles (usec): 00:17:17.944 | 1.00th=[ 2245], 5.00th=[ 3982], 10.00th=[ 5014], 20.00th=[ 6259], 00:17:17.944 | 30.00th=[ 6915], 40.00th=[ 7701], 50.00th=[ 8356], 60.00th=[ 8979], 00:17:17.944 | 70.00th=[11731], 80.00th=[12649], 90.00th=[18220], 95.00th=[33424], 00:17:17.944 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:17:17.944 | 99.99th=[43779] 00:17:17.944 bw ( KiB/s): min=21576, max=24576, per=26.17%, avg=23076.00, stdev=2121.32, samples=2 00:17:17.944 iops : min= 5394, max= 6144, avg=5769.00, stdev=530.33, samples=2 00:17:17.944 lat (msec) : 2=0.29%, 4=2.59%, 10=56.88%, 20=31.36%, 50=8.75% 00:17:17.944 lat (msec) : 100=0.11% 00:17:17.944 cpu : usr=4.16%, sys=7.13%, ctx=394, majf=0, minf=1 00:17:17.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:17.944 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.944 issued rwts: total=5632,5897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.944 job3: (groupid=0, jobs=1): err= 0: pid=670338: Mon Jul 15 13:02:39 2024 00:17:17.944 read: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec) 00:17:17.944 slat (nsec): min=975, max=13051k, avg=101270.89, stdev=664449.59 00:17:17.944 clat (usec): min=3963, max=66204, avg=11870.70, stdev=8150.77 00:17:17.944 lat (usec): min=3970, max=66236, avg=11971.97, stdev=8216.36 00:17:17.944 clat percentiles (usec): 00:17:17.944 | 1.00th=[ 4817], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 6915], 00:17:17.944 | 30.00th=[ 7504], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9896], 00:17:17.944 | 70.00th=[12125], 80.00th=[14746], 90.00th=[21890], 95.00th=[27919], 00:17:17.944 | 99.00th=[52167], 99.50th=[60031], 99.90th=[61080], 99.95th=[66323], 00:17:17.944 | 99.99th=[66323] 00:17:17.944 write: IOPS=5436, BW=21.2MiB/s (22.3MB/s)(21.5MiB/1012msec); 0 zone resets 00:17:17.944 slat (nsec): min=1623, max=15758k, avg=80272.16, stdev=461755.39 00:17:17.944 clat (usec): min=2106, max=66195, avg=12248.49, stdev=7822.31 00:17:17.944 lat (usec): min=2114, max=66203, avg=12328.76, stdev=7858.59 00:17:17.944 clat percentiles (usec): 00:17:17.944 | 1.00th=[ 3294], 5.00th=[ 4555], 10.00th=[ 5014], 20.00th=[ 5669], 00:17:17.944 | 30.00th=[ 7308], 40.00th=[ 9634], 50.00th=[11994], 60.00th=[12387], 00:17:17.944 | 70.00th=[12780], 80.00th=[15139], 90.00th=[21103], 95.00th=[29754], 00:17:17.944 | 99.00th=[44303], 99.50th=[52167], 99.90th=[55837], 99.95th=[55837], 00:17:17.944 | 99.99th=[66323] 00:17:17.944 bw ( KiB/s): min=21168, max=21832, per=24.38%, avg=21500.00, stdev=469.52, samples=2 00:17:17.944 iops : min= 5292, max= 5458, avg=5375.00, stdev=117.38, samples=2 00:17:17.944 lat (msec) : 4=1.84%, 10=48.89%, 20=38.21%, 50=10.18%, 100=0.88% 00:17:17.944 cpu : usr=5.54%, sys=5.44%, ctx=562, majf=0, minf=1 00:17:17.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:17.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:17.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:17.944 issued rwts: total=5120,5502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:17.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:17.944 00:17:17.944 Run status group 0 (all jobs): 00:17:17.944 READ: bw=81.0MiB/s (85.0MB/s), 17.8MiB/s-21.8MiB/s (18.7MB/s-22.8MB/s), io=82.0MiB (86.0MB), run=1011-1012msec 00:17:17.944 WRITE: bw=86.1MiB/s (90.3MB/s), 18.7MiB/s-23.4MiB/s (19.6MB/s-24.6MB/s), io=87.1MiB (91.4MB), run=1011-1012msec 00:17:17.944 00:17:17.944 Disk stats (read/write): 00:17:17.944 nvme0n1: ios=4147/4167, merge=0/0, ticks=42331/58290, in_queue=100621, util=87.88% 00:17:17.944 nvme0n2: ios=5161/5381, merge=0/0, ticks=50622/45241, in_queue=95863, util=92.62% 00:17:17.944 nvme0n3: ios=4335/4608, merge=0/0, ticks=43454/48777, in_queue=92231, util=94.59% 00:17:17.944 nvme0n4: ios=4116/4103, merge=0/0, ticks=49923/51902, in_queue=101825, util=97.53% 00:17:17.944 13:02:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:17.944 13:02:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=670468 00:17:17.944 13:02:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:17.944 13:02:39 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:17.944 [global] 00:17:17.944 thread=1 00:17:17.944 invalidate=1 00:17:17.944 rw=read 00:17:17.944 time_based=1 00:17:17.944 runtime=10 00:17:17.944 ioengine=libaio 00:17:17.944 direct=1 00:17:17.944 bs=4096 00:17:17.944 iodepth=1 00:17:17.944 norandommap=1 00:17:17.944 numjobs=1 00:17:17.944 00:17:17.944 [job0] 00:17:17.944 filename=/dev/nvme0n1 00:17:17.944 [job1] 00:17:17.944 filename=/dev/nvme0n2 00:17:17.944 [job2] 00:17:17.944 filename=/dev/nvme0n3 00:17:17.944 [job3] 00:17:17.944 filename=/dev/nvme0n4 00:17:17.944 Could not set queue depth (nvme0n1) 00:17:17.944 Could not set queue depth (nvme0n2) 00:17:17.944 Could not set queue depth (nvme0n3) 00:17:17.944 Could not set queue depth (nvme0n4) 00:17:18.206 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:18.206 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:18.206 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:18.206 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:18.206 fio-3.35 00:17:18.206 Starting 4 threads 00:17:20.771 13:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:21.031 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=9105408, buflen=4096 00:17:21.031 fio: pid=670837, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:21.031 13:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:21.290 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=270336, buflen=4096 00:17:21.290 fio: pid=670830, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:21.290 13:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:21.290 13:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:21.290 13:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:21.290 13:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:21.290 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=4431872, buflen=4096 00:17:21.290 fio: pid=670813, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:21.552 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11046912, buflen=4096 00:17:21.552 fio: pid=670823, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:21.552 13:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:21.552 13:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:21.552 00:17:21.552 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): 
pid=670813: Mon Jul 15 13:02:43 2024 00:17:21.552 read: IOPS=370, BW=1479KiB/s (1515kB/s)(4328KiB/2926msec) 00:17:21.552 slat (usec): min=6, max=18435, avg=56.57, stdev=750.03 00:17:21.552 clat (usec): min=687, max=43097, avg=2639.88, stdev=7453.88 00:17:21.552 lat (usec): min=724, max=43122, avg=2681.27, stdev=7471.17 00:17:21.552 clat percentiles (usec): 00:17:21.552 | 1.00th=[ 1029], 5.00th=[ 1123], 10.00th=[ 1172], 20.00th=[ 1188], 00:17:21.552 | 30.00th=[ 1221], 40.00th=[ 1237], 50.00th=[ 1254], 60.00th=[ 1254], 00:17:21.552 | 70.00th=[ 1270], 80.00th=[ 1303], 90.00th=[ 1319], 95.00th=[ 1369], 00:17:21.552 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:17:21.552 | 99.99th=[43254] 00:17:21.552 bw ( KiB/s): min= 840, max= 2368, per=19.51%, avg=1553.60, stdev=556.20, samples=5 00:17:21.552 iops : min= 210, max= 592, avg=388.40, stdev=139.05, samples=5 00:17:21.552 lat (usec) : 750=0.09%, 1000=0.28% 00:17:21.552 lat (msec) : 2=96.12%, 50=3.42% 00:17:21.552 cpu : usr=0.48%, sys=0.99%, ctx=1085, majf=0, minf=1 00:17:21.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:21.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.552 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.552 issued rwts: total=1083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:21.552 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=670823: Mon Jul 15 13:02:43 2024 00:17:21.552 read: IOPS=884, BW=3537KiB/s (3622kB/s)(10.5MiB/3050msec) 00:17:21.552 slat (usec): min=6, max=13154, avg=38.11, stdev=335.87 00:17:21.552 clat (usec): min=690, max=1295, avg=1085.89, stdev=65.21 00:17:21.552 lat (usec): min=716, max=14233, avg=1124.00, stdev=342.10 00:17:21.552 clat percentiles (usec): 00:17:21.552 | 1.00th=[ 881], 5.00th=[ 971], 10.00th=[ 1012], 20.00th=[ 1045], 00:17:21.552 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:17:21.552 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1156], 95.00th=[ 1188], 00:17:21.552 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1270], 99.95th=[ 1287], 00:17:21.552 | 99.99th=[ 1303] 00:17:21.552 bw ( KiB/s): min= 3544, max= 3624, per=44.94%, avg=3576.00, stdev=29.39, samples=5 00:17:21.552 iops : min= 886, max= 906, avg=894.00, stdev= 7.35, samples=5 00:17:21.552 lat (usec) : 750=0.15%, 1000=8.38% 00:17:21.552 lat (msec) : 2=91.44% 00:17:21.552 cpu : usr=1.51%, sys=3.54%, ctx=2702, majf=0, minf=1 00:17:21.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:21.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.552 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.552 issued rwts: total=2698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:21.552 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=670830: Mon Jul 15 13:02:43 2024 00:17:21.552 read: IOPS=24, BW=96.5KiB/s (98.8kB/s)(264KiB/2735msec) 00:17:21.552 slat (usec): min=25, max=9678, avg=170.34, stdev=1179.25 00:17:21.552 clat (usec): min=912, max=43002, avg=41245.55, stdev=5076.56 00:17:21.552 lat (usec): min=952, max=51982, avg=41418.07, stdev=5242.15 00:17:21.552 clat percentiles (usec): 00:17:21.552 | 1.00th=[ 914], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 
00:17:21.552 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:17:21.552 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:17:21.552 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:21.552 | 99.99th=[43254] 00:17:21.552 bw ( KiB/s): min= 96, max= 96, per=1.21%, avg=96.00, stdev= 0.00, samples=5 00:17:21.552 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:17:21.552 lat (usec) : 1000=1.49% 00:17:21.552 lat (msec) : 50=97.01% 00:17:21.552 cpu : usr=0.15%, sys=0.00%, ctx=68, majf=0, minf=1 00:17:21.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:21.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.552 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.552 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:21.552 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=670837: Mon Jul 15 13:02:43 2024 00:17:21.552 read: IOPS=870, BW=3482KiB/s (3565kB/s)(8892KiB/2554msec) 00:17:21.552 slat (nsec): min=7120, max=59580, avg=24763.37, stdev=3453.79 00:17:21.552 clat (usec): min=833, max=1332, avg=1116.97, stdev=65.24 00:17:21.552 lat (usec): min=857, max=1356, avg=1141.73, stdev=65.24 00:17:21.552 clat percentiles (usec): 00:17:21.553 | 1.00th=[ 922], 5.00th=[ 1004], 10.00th=[ 1037], 20.00th=[ 1074], 00:17:21.553 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:17:21.553 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1188], 95.00th=[ 1221], 00:17:21.553 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1319], 99.95th=[ 1336], 00:17:21.553 | 99.99th=[ 1336] 00:17:21.553 bw ( KiB/s): min= 3472, max= 3504, per=43.83%, avg=3488.00, stdev=12.65, samples=5 00:17:21.553 iops : min= 868, max= 876, avg=872.00, stdev= 3.16, samples=5 00:17:21.553 lat (usec) : 1000=5.17% 00:17:21.553 lat (msec) : 2=94.78% 00:17:21.553 cpu : usr=1.06%, sys=2.43%, ctx=2224, majf=0, minf=2 00:17:21.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:21.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.553 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.553 issued rwts: total=2224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:21.553 00:17:21.553 Run status group 0 (all jobs): 00:17:21.553 READ: bw=7958KiB/s (8149kB/s), 96.5KiB/s-3537KiB/s (98.8kB/s-3622kB/s), io=23.7MiB (24.9MB), run=2554-3050msec 00:17:21.553 00:17:21.553 Disk stats (read/write): 00:17:21.553 nvme0n1: ios=1114/0, merge=0/0, ticks=2828/0, in_queue=2828, util=95.63% 00:17:21.553 nvme0n2: ios=2515/0, merge=0/0, ticks=2479/0, in_queue=2479, util=94.53% 00:17:21.553 nvme0n3: ios=62/0, merge=0/0, ticks=2557/0, in_queue=2557, util=95.99% 00:17:21.553 nvme0n4: ios=2034/0, merge=0/0, ticks=2191/0, in_queue=2191, util=96.05% 00:17:21.553 13:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:21.553 13:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:21.814 13:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
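For reference, the teardown that target/fio.sh is tracing at this point reduces to the short sketch below. It only restates rpc.py calls already visible in this log (bdev_raid_delete for concat0 and raid0, then bdev_malloc_delete for each Malloc bdev); the bdev names and workspace path are specific to this run, and the RPC shell variable is shorthand introduced here. Pulling the backing bdevs out while fio still has the namespaces open is what produces the expected Remote I/O errors seen above.

    # Sketch of the hotplug teardown traced in this test, assuming the bdev
    # names and rpc.py location shown in this run (default RPC socket).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_raid_delete concat0
    $RPC bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        # removing the backing bdev from under an active namespace is what
        # makes the in-flight fio reads fail with Remote I/O error, as expected
        $RPC bdev_malloc_delete "$malloc_bdev"
    done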
00:17:21.814 13:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:22.074 13:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:22.074 13:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:22.074 13:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:22.074 13:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 670468 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:22.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:22.334 nvmf hotplug test: fio failed as expected 00:17:22.334 13:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:22.594 rmmod nvme_tcp 00:17:22.594 rmmod nvme_fabrics 00:17:22.594 rmmod 
nvme_keyring 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 666977 ']' 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 666977 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 666977 ']' 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 666977 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:22.594 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 666977 00:17:22.855 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:22.855 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:22.855 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 666977' 00:17:22.855 killing process with pid 666977 00:17:22.855 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 666977 00:17:22.856 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 666977 00:17:22.856 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:22.856 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:22.856 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:22.856 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.856 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:22.856 13:02:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.856 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.856 13:02:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.412 13:02:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.412 00:17:25.412 real 0m29.043s 00:17:25.412 user 2m23.955s 00:17:25.412 sys 0m9.607s 00:17:25.412 13:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.412 13:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.412 ************************************ 00:17:25.412 END TEST nvmf_fio_target 00:17:25.412 ************************************ 00:17:25.412 13:02:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:25.412 13:02:46 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:25.412 13:02:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:25.412 13:02:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.412 13:02:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:25.412 ************************************ 00:17:25.412 START TEST nvmf_bdevio 00:17:25.412 ************************************ 00:17:25.412 
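Before the bdevio output that follows, the target-side configuration this test builds can be summarized in one place. The sketch below simply collects the rpc_cmd calls traced later in this test (transport creation, a Malloc0 namespace under cnode1, and a TCP listener on 10.0.0.2:4420); the rpc.py path, NQN, serial number and addresses are the values shown in this log, the RPC variable is shorthand added here, and in the actual run these commands are issued against the nvmf_tgt started inside the cvl_0_0_ns_spdk network namespace.

    # Sketch of the NVMe-oF TCP target setup that bdevio exercises in this test,
    # using only values visible in this log (default RPC socket assumed).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420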
13:02:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:25.412 * Looking for test storage... 00:17:25.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:25.412 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:25.413 13:02:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:25.413 13:02:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:33.549 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:33.549 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.549 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:33.550 Found net devices under 0000:31:00.0: cvl_0_0 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:33.550 
Found net devices under 0000:31:00.1: cvl_0_1 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:33.550 13:02:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:33.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.782 ms 00:17:33.550 00:17:33.550 --- 10.0.0.2 ping statistics --- 00:17:33.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.550 rtt min/avg/max/mdev = 0.782/0.782/0.782/0.000 ms 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:33.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:17:33.550 00:17:33.550 --- 10.0.0.1 ping statistics --- 00:17:33.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.550 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=676356 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 676356 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 676356 ']' 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.550 13:02:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:33.550 [2024-07-15 13:02:55.161685] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:17:33.550 [2024-07-15 13:02:55.161753] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.550 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.550 [2024-07-15 13:02:55.258250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.550 [2024-07-15 13:02:55.348654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.550 [2024-07-15 13:02:55.348717] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:33.550 [2024-07-15 13:02:55.348725] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.550 [2024-07-15 13:02:55.348732] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.550 [2024-07-15 13:02:55.348740] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.550 [2024-07-15 13:02:55.348923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:33.550 [2024-07-15 13:02:55.349083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:33.550 [2024-07-15 13:02:55.349264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:33.550 [2024-07-15 13:02:55.349266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.496 13:02:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.496 13:02:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:34.496 13:02:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:34.496 13:02:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:34.496 13:02:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.496 [2024-07-15 13:02:56.008450] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.496 Malloc0 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:17:34.496 [2024-07-15 13:02:56.073447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:34.496 { 00:17:34.496 "params": { 00:17:34.496 "name": "Nvme$subsystem", 00:17:34.496 "trtype": "$TEST_TRANSPORT", 00:17:34.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:34.496 "adrfam": "ipv4", 00:17:34.496 "trsvcid": "$NVMF_PORT", 00:17:34.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:34.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:34.496 "hdgst": ${hdgst:-false}, 00:17:34.496 "ddgst": ${ddgst:-false} 00:17:34.496 }, 00:17:34.496 "method": "bdev_nvme_attach_controller" 00:17:34.496 } 00:17:34.496 EOF 00:17:34.496 )") 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:34.496 13:02:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:34.496 "params": { 00:17:34.496 "name": "Nvme1", 00:17:34.496 "trtype": "tcp", 00:17:34.496 "traddr": "10.0.0.2", 00:17:34.496 "adrfam": "ipv4", 00:17:34.496 "trsvcid": "4420", 00:17:34.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:34.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.496 "hdgst": false, 00:17:34.496 "ddgst": false 00:17:34.496 }, 00:17:34.496 "method": "bdev_nvme_attach_controller" 00:17:34.496 }' 00:17:34.496 [2024-07-15 13:02:56.130158] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:17:34.496 [2024-07-15 13:02:56.130223] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676604 ] 00:17:34.496 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.496 [2024-07-15 13:02:56.204482] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:34.496 [2024-07-15 13:02:56.280287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.496 [2024-07-15 13:02:56.280338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.496 [2024-07-15 13:02:56.280341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.757 I/O targets: 00:17:34.757 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:34.757 00:17:34.757 00:17:34.757 CUnit - A unit testing framework for C - Version 2.1-3 00:17:34.757 http://cunit.sourceforge.net/ 00:17:34.757 00:17:34.757 00:17:34.757 Suite: bdevio tests on: Nvme1n1 00:17:35.018 Test: blockdev write read block ...passed 00:17:35.018 Test: blockdev write zeroes read block ...passed 00:17:35.018 Test: blockdev write zeroes read no split ...passed 00:17:35.018 Test: blockdev write zeroes read split ...passed 00:17:35.018 Test: blockdev write zeroes read split partial ...passed 00:17:35.018 Test: blockdev reset ...[2024-07-15 13:02:56.718642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:35.018 [2024-07-15 13:02:56.718704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14cd370 (9): Bad file descriptor 00:17:35.018 [2024-07-15 13:02:56.773422] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:35.018 passed 00:17:35.018 Test: blockdev write read 8 blocks ...passed 00:17:35.018 Test: blockdev write read size > 128k ...passed 00:17:35.018 Test: blockdev write read invalid size ...passed 00:17:35.279 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:35.279 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:35.280 Test: blockdev write read max offset ...passed 00:17:35.280 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:35.280 Test: blockdev writev readv 8 blocks ...passed 00:17:35.280 Test: blockdev writev readv 30 x 1block ...passed 00:17:35.280 Test: blockdev writev readv block ...passed 00:17:35.280 Test: blockdev writev readv size > 128k ...passed 00:17:35.280 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:35.280 Test: blockdev comparev and writev ...[2024-07-15 13:02:57.031957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.280 [2024-07-15 13:02:57.031984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.280 [2024-07-15 13:02:57.031994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.280 [2024-07-15 13:02:57.032001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:35.280 [2024-07-15 13:02:57.032218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.280 [2024-07-15 13:02:57.032228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:35.280 [2024-07-15 13:02:57.032241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.280 [2024-07-15 13:02:57.032247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:35.280 [2024-07-15 13:02:57.032461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.280 [2024-07-15 13:02:57.032470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:35.280 [2024-07-15 13:02:57.032479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.280 [2024-07-15 13:02:57.032485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:35.280 [2024-07-15 13:02:57.032703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.280 [2024-07-15 13:02:57.032712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:35.280 [2024-07-15 13:02:57.032721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:35.280 [2024-07-15 13:02:57.032728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:35.280 passed 00:17:35.541 Test: blockdev nvme passthru rw ...passed 00:17:35.541 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:02:57.116602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.541 [2024-07-15 13:02:57.116613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:35.541 [2024-07-15 13:02:57.116709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.541 [2024-07-15 13:02:57.116717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:35.541 [2024-07-15 13:02:57.116816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.541 [2024-07-15 13:02:57.116824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:35.541 [2024-07-15 13:02:57.116916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:35.541 [2024-07-15 13:02:57.116925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:35.541 passed 00:17:35.541 Test: blockdev nvme admin passthru ...passed 00:17:35.541 Test: blockdev copy ...passed 00:17:35.541 00:17:35.541 Run Summary: Type Total Ran Passed Failed Inactive 00:17:35.541 suites 1 1 n/a 0 0 00:17:35.541 tests 23 23 23 0 0 00:17:35.542 asserts 152 152 152 0 n/a 00:17:35.542 00:17:35.542 Elapsed time = 1.258 seconds 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.542 rmmod nvme_tcp 00:17:35.542 rmmod nvme_fabrics 00:17:35.542 rmmod nvme_keyring 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 676356 ']' 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 676356 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
676356 ']' 00:17:35.542 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 676356 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 676356 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 676356' 00:17:35.803 killing process with pid 676356 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 676356 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 676356 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.803 13:02:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.352 13:02:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:38.352 00:17:38.352 real 0m12.885s 00:17:38.352 user 0m13.552s 00:17:38.352 sys 0m6.648s 00:17:38.352 13:02:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:38.352 13:02:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:38.352 ************************************ 00:17:38.352 END TEST nvmf_bdevio 00:17:38.352 ************************************ 00:17:38.352 13:02:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:38.352 13:02:59 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:38.352 13:02:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:38.352 13:02:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:38.352 13:02:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:38.352 ************************************ 00:17:38.352 START TEST nvmf_auth_target 00:17:38.352 ************************************ 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:38.352 * Looking for test storage... 
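
The END TEST / START TEST banners above come from the suite's run_test wrapper in autotest_common.sh. The snippet below is only a minimal stand-in for that wrapper, written to show the behaviour visible in the trace (a banner, a timed sub-script, a propagated exit status); the banner wording, the timing line and the name run_test_sketch are illustrative, not the real helper.

# Minimal stand-in for the run_test wrapper whose banners appear in the trace;
# the real helper in autotest_common.sh also manages xtrace and per-test timing.
run_test_sketch() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    local start=$SECONDS rc=0
    "$@" || rc=$?                     # run the sub-script, keep its exit status
    echo "************ END TEST $name (rc=$rc, $((SECONDS - start))s) ************"
    return "$rc"
}

# Shape of the call seen above (path shortened):
# run_test_sketch nvmf_auth_target test/nvmf/target/auth.sh --transport=tcp
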
00:17:38.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.352 13:02:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:38.353 13:02:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.499 13:03:07 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:46.499 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:46.499 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:17:46.499 Found net devices under 0000:31:00.0: cvl_0_0 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:46.499 Found net devices under 0000:31:00.1: cvl_0_1 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:17:46.499 00:17:46.499 --- 10.0.0.2 ping statistics --- 00:17:46.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.499 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:17:46.499 00:17:46.499 --- 10.0.0.1 ping statistics --- 00:17:46.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.499 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=681518 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 681518 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 681518 ']' 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.499 13:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.500 13:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
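
The nvmf_tcp_init sequence traced above (common.sh@229 through @268) moves the target-side port of the E810 NIC into its own network namespace so that initiator and target traffic actually crosses the link. Below is a condensed sketch of those steps, not the common.sh helper itself; the interface names and the 10.0.0.0/24 addresses are the ones the trace shows, and root privileges are assumed.

# Condensed sketch of the namespace setup from the trace (requires root).
TGT_IF=cvl_0_0          # target-side port, moved into the namespace
INI_IF=cvl_0_1          # initiator-side port, stays in the default namespace
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port and verify reachability in both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth in the trace), while the host-side spdk_tgt and the nvme initiator commands run in the default namespace.
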
00:17:46.500 13:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.500 13:03:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=682073 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e3769e7849e00c79c677d566a75a8711526865f55cb71bbe 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ysV 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e3769e7849e00c79c677d566a75a8711526865f55cb71bbe 0 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e3769e7849e00c79c677d566a75a8711526865f55cb71bbe 0 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e3769e7849e00c79c677d566a75a8711526865f55cb71bbe 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ysV 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ysV 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.ysV 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:47.072 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:47.073 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:47.073 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0093cf50e50aa04af0b078a8f5cce5e4b85abac6f94e782016516f2b061abb37 00:17:47.073 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:47.073 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.99i 00:17:47.073 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0093cf50e50aa04af0b078a8f5cce5e4b85abac6f94e782016516f2b061abb37 3 00:17:47.073 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0093cf50e50aa04af0b078a8f5cce5e4b85abac6f94e782016516f2b061abb37 3 00:17:47.073 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.073 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.073 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0093cf50e50aa04af0b078a8f5cce5e4b85abac6f94e782016516f2b061abb37 00:17:47.073 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:47.073 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.99i 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.99i 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.99i 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fc10d697d759a49c51d1bdc37864e0fd 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.TzV 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fc10d697d759a49c51d1bdc37864e0fd 1 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fc10d697d759a49c51d1bdc37864e0fd 1 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=fc10d697d759a49c51d1bdc37864e0fd 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.TzV 00:17:47.334 13:03:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.TzV 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.TzV 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9346c6b561fed5c86b152b506918316430ac2abf29dd01e0 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.9ce 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9346c6b561fed5c86b152b506918316430ac2abf29dd01e0 2 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9346c6b561fed5c86b152b506918316430ac2abf29dd01e0 2 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9346c6b561fed5c86b152b506918316430ac2abf29dd01e0 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.9ce 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.9ce 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.9ce 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c5bfa1b8f1162d25487c556deb32b8a6eed5f5c5291bc5b1 00:17:47.334 
13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7jP 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c5bfa1b8f1162d25487c556deb32b8a6eed5f5c5291bc5b1 2 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c5bfa1b8f1162d25487c556deb32b8a6eed5f5c5291bc5b1 2 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.334 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c5bfa1b8f1162d25487c556deb32b8a6eed5f5c5291bc5b1 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7jP 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7jP 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.7jP 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=64720cd99db8566afec16e2aba275d7e 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ch1 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 64720cd99db8566afec16e2aba275d7e 1 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 64720cd99db8566afec16e2aba275d7e 1 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=64720cd99db8566afec16e2aba275d7e 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:47.335 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.596 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ch1 00:17:47.596 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ch1 00:17:47.596 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.ch1 00:17:47.596 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:47.596 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:47.596 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.596 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.596 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7360ec81f1673cec785dcd9439992d258d3bf5a6f3f3be33af41339d685f6fda 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dqm 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7360ec81f1673cec785dcd9439992d258d3bf5a6f3f3be33af41339d685f6fda 3 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7360ec81f1673cec785dcd9439992d258d3bf5a6f3f3be33af41339d685f6fda 3 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7360ec81f1673cec785dcd9439992d258d3bf5a6f3f3be33af41339d685f6fda 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dqm 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dqm 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.dqm 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 681518 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 681518 ']' 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
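
The gen_dhchap_key calls traced above build the DH-HMAC-CHAP secrets for the auth test: xxd pulls 16, 24 or 32 random bytes from /dev/urandom as a hex string, an inline python helper (whose body is not part of the trace) wraps it into a DHHC-1:<digest-index>:<base64>: string, and the result lands in a mktemp file with mode 0600. The sketch below reproduces that flow; the CRC-32 trailer inside the base64 payload is an assumption inferred from the secret strings that appear later in the trace, and gen_dhchap_key_sketch is an illustrative name, not the common.sh function.

# Illustrative re-creation of the key generation traced above (digest indices:
# null=0, sha256=1, sha384=2, sha512=3). The CRC-32 trailer is an assumption.
gen_dhchap_key_sketch() {
    local digest_idx=$1 len_bytes=$2
    local hexkey
    hexkey=$(xxd -p -c0 -l "$len_bytes" /dev/urandom)   # e.g. 48 hex chars for 24 bytes
    python3 - "$hexkey" "$digest_idx" <<'PY'
import base64, sys, zlib
secret, digest = sys.argv[1].encode(), int(sys.argv[2])
# Assumed layout: base64(ASCII-hex secret || little-endian CRC-32 of it)
blob = secret + zlib.crc32(secret).to_bytes(4, "little")
print(f"DHHC-1:{digest:02x}:{base64.b64encode(blob).decode()}:")
PY
}

# Key 0 in the trace: a 48-character secret for the "null" digest, stored 0600.
keyfile=$(mktemp -t spdk.key-null.XXX)
gen_dhchap_key_sketch 0 24 > "$keyfile"
chmod 0600 "$keyfile"

Each generated file is then registered twice, as the rest of the trace shows: once on the target with rpc_cmd keyring_file_add_key keyN /tmp/spdk.key-..., and once on the host application through hostrpc (rpc.py -s /var/tmp/host.sock keyring_file_add_key ...), so both sides resolve the same named key during authentication.
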
00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 682073 /var/tmp/host.sock 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 682073 ']' 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:47.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.597 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.858 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.858 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:47.858 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:47.858 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.858 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.858 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.858 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:47.858 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ysV 00:17:47.858 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.858 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.858 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.858 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ysV 00:17:47.858 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ysV 00:17:48.119 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.99i ]] 00:17:48.119 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.99i 00:17:48.119 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.119 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.119 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.119 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.99i 00:17:48.119 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.99i 00:17:48.380 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:48.380 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.TzV 00:17:48.380 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.380 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.380 13:03:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.380 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.TzV 00:17:48.380 13:03:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.TzV 00:17:48.380 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.9ce ]] 00:17:48.380 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9ce 00:17:48.380 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.380 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.380 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.380 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9ce 00:17:48.380 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9ce 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.7jP 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.7jP 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.7jP 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.ch1 ]] 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ch1 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ch1 00:17:48.641 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.ch1 00:17:48.902 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:48.902 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dqm 00:17:48.902 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.902 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.902 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.902 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.dqm 00:17:48.902 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.dqm 00:17:48.902 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.163 13:03:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.424 00:17:49.424 13:03:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.424 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.424 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.684 { 00:17:49.684 "cntlid": 1, 00:17:49.684 "qid": 0, 00:17:49.684 "state": "enabled", 00:17:49.684 "thread": "nvmf_tgt_poll_group_000", 00:17:49.684 "listen_address": { 00:17:49.684 "trtype": "TCP", 00:17:49.684 "adrfam": "IPv4", 00:17:49.684 "traddr": "10.0.0.2", 00:17:49.684 "trsvcid": "4420" 00:17:49.684 }, 00:17:49.684 "peer_address": { 00:17:49.684 "trtype": "TCP", 00:17:49.684 "adrfam": "IPv4", 00:17:49.684 "traddr": "10.0.0.1", 00:17:49.684 "trsvcid": "33270" 00:17:49.684 }, 00:17:49.684 "auth": { 00:17:49.684 "state": "completed", 00:17:49.684 "digest": "sha256", 00:17:49.684 "dhgroup": "null" 00:17:49.684 } 00:17:49.684 } 00:17:49.684 ]' 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.684 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.944 13:03:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.880 13:03:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.880 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.140 00:17:51.140 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.140 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.140 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.140 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.140 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.140 13:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.140 13:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.140 13:03:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.140 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.140 { 00:17:51.140 "cntlid": 3, 00:17:51.140 "qid": 0, 00:17:51.140 
"state": "enabled", 00:17:51.140 "thread": "nvmf_tgt_poll_group_000", 00:17:51.140 "listen_address": { 00:17:51.140 "trtype": "TCP", 00:17:51.140 "adrfam": "IPv4", 00:17:51.140 "traddr": "10.0.0.2", 00:17:51.140 "trsvcid": "4420" 00:17:51.140 }, 00:17:51.140 "peer_address": { 00:17:51.140 "trtype": "TCP", 00:17:51.140 "adrfam": "IPv4", 00:17:51.140 "traddr": "10.0.0.1", 00:17:51.140 "trsvcid": "33306" 00:17:51.140 }, 00:17:51.140 "auth": { 00:17:51.140 "state": "completed", 00:17:51.140 "digest": "sha256", 00:17:51.140 "dhgroup": "null" 00:17:51.140 } 00:17:51.140 } 00:17:51.140 ]' 00:17:51.140 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.140 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.141 13:03:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.400 13:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:51.400 13:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.400 13:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.400 13:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.400 13:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.400 13:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:17:52.336 13:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.336 13:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:52.336 13:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.336 13:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.336 13:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.336 13:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.336 13:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:52.336 13:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:52.336 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:52.336 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.336 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.336 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:52.336 13:03:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:52.336 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.336 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.336 13:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.336 13:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.336 13:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.336 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.336 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.600 00:17:52.600 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.600 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.600 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.908 { 00:17:52.908 "cntlid": 5, 00:17:52.908 "qid": 0, 00:17:52.908 "state": "enabled", 00:17:52.908 "thread": "nvmf_tgt_poll_group_000", 00:17:52.908 "listen_address": { 00:17:52.908 "trtype": "TCP", 00:17:52.908 "adrfam": "IPv4", 00:17:52.908 "traddr": "10.0.0.2", 00:17:52.908 "trsvcid": "4420" 00:17:52.908 }, 00:17:52.908 "peer_address": { 00:17:52.908 "trtype": "TCP", 00:17:52.908 "adrfam": "IPv4", 00:17:52.908 "traddr": "10.0.0.1", 00:17:52.908 "trsvcid": "56424" 00:17:52.908 }, 00:17:52.908 "auth": { 00:17:52.908 "state": "completed", 00:17:52.908 "digest": "sha256", 00:17:52.908 "dhgroup": "null" 00:17:52.908 } 00:17:52.908 } 00:17:52.908 ]' 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.908 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.191 13:03:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:17:53.762 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.763 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:53.763 13:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.763 13:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.763 13:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.763 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.763 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:53.763 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:54.023 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:54.024 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.024 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.024 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:54.024 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:54.024 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.024 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:54.024 13:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.024 13:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.024 13:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.024 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.024 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.285 00:17:54.285 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.285 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.285 13:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.285 13:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.285 13:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.285 13:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.285 13:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.285 13:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.285 13:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.285 { 00:17:54.285 "cntlid": 7, 00:17:54.285 "qid": 0, 00:17:54.285 "state": "enabled", 00:17:54.285 "thread": "nvmf_tgt_poll_group_000", 00:17:54.285 "listen_address": { 00:17:54.285 "trtype": "TCP", 00:17:54.285 "adrfam": "IPv4", 00:17:54.285 "traddr": "10.0.0.2", 00:17:54.285 "trsvcid": "4420" 00:17:54.285 }, 00:17:54.285 "peer_address": { 00:17:54.285 "trtype": "TCP", 00:17:54.285 "adrfam": "IPv4", 00:17:54.285 "traddr": "10.0.0.1", 00:17:54.285 "trsvcid": "56442" 00:17:54.285 }, 00:17:54.285 "auth": { 00:17:54.285 "state": "completed", 00:17:54.285 "digest": "sha256", 00:17:54.285 "dhgroup": "null" 00:17:54.285 } 00:17:54.285 } 00:17:54.285 ]' 00:17:54.545 13:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.545 13:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.545 13:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.545 13:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:54.545 13:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.545 13:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.545 13:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.545 13:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.805 13:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:17:55.375 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.375 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:55.375 13:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.375 13:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.375 13:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.375 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.375 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.375 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:55.375 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:55.636 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:55.636 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.636 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.636 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:55.636 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:55.636 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.636 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.636 13:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.636 13:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.636 13:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.636 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.636 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.896 00:17:55.896 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.896 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.896 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.896 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.896 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.896 13:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:55.896 13:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.896 13:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.896 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.896 { 00:17:55.896 "cntlid": 9, 00:17:55.896 "qid": 0, 00:17:55.896 "state": "enabled", 00:17:55.896 "thread": "nvmf_tgt_poll_group_000", 00:17:55.896 "listen_address": { 00:17:55.896 "trtype": "TCP", 00:17:55.896 "adrfam": "IPv4", 00:17:55.896 "traddr": "10.0.0.2", 00:17:55.896 "trsvcid": "4420" 00:17:55.896 }, 00:17:55.896 "peer_address": { 00:17:55.896 "trtype": "TCP", 00:17:55.896 "adrfam": "IPv4", 00:17:55.896 "traddr": "10.0.0.1", 00:17:55.896 "trsvcid": "56466" 00:17:55.896 }, 00:17:55.896 "auth": { 00:17:55.896 "state": "completed", 00:17:55.896 "digest": "sha256", 00:17:55.896 "dhgroup": "ffdhe2048" 00:17:55.896 } 00:17:55.896 } 00:17:55.896 ]' 00:17:55.896 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.156 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.156 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.156 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:56.156 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.156 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.156 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.156 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.156 13:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:17:57.097 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.097 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:57.097 13:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.098 13:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.358 00:17:57.358 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.358 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.358 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.619 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.619 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.619 13:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.619 13:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.619 13:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.619 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.619 { 00:17:57.619 "cntlid": 11, 00:17:57.619 "qid": 0, 00:17:57.619 "state": "enabled", 00:17:57.619 "thread": "nvmf_tgt_poll_group_000", 00:17:57.619 "listen_address": { 00:17:57.619 "trtype": "TCP", 00:17:57.619 "adrfam": "IPv4", 00:17:57.619 "traddr": "10.0.0.2", 00:17:57.619 "trsvcid": "4420" 00:17:57.619 }, 00:17:57.619 "peer_address": { 00:17:57.619 "trtype": "TCP", 00:17:57.619 "adrfam": "IPv4", 00:17:57.619 "traddr": "10.0.0.1", 00:17:57.619 "trsvcid": "56490" 00:17:57.619 }, 00:17:57.619 "auth": { 00:17:57.619 "state": "completed", 00:17:57.619 "digest": "sha256", 00:17:57.619 "dhgroup": "ffdhe2048" 00:17:57.619 } 00:17:57.619 } 00:17:57.619 ]' 00:17:57.619 
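
The trace above repeats the same connect_authenticate round trip for every digest/dhgroup/key combination. As a reading aid, here is a minimal sketch of a single iteration reconstructed only from the RPC calls visible in this log; it assumes the SPDK target (default rpc.py socket) and the host application (-s /var/tmp/host.sock) are already running and that key1/ckey1 were loaded earlier with keyring_file_add_key. The RPC/HOSTSOCK/SUBNQN/HOSTNQN shell variables are illustrative shorthand, not part of the original script, and piping into jq stands in for the script's captured variables.

# sketch of one connect_authenticate pass (sha256 / ffdhe2048 / key1)
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # illustrative shorthand
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# restrict the host to one digest/dhgroup pair for this pass
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# allow the host on the subsystem with bidirectional keys (target-side RPC)
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

# attach from the host side; DH-HMAC-CHAP runs during the CONNECT exchange
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

# verify the controller exists and the qpair negotiated the expected parameters
$RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'       # expect "nvme0"
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'     # expect "completed"
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'    # expect "sha256"
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'   # expect "ffdhe2048"

# tear down before the next key/dhgroup combination
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
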
13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.619 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.619 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.619 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.619 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.619 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.619 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.619 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.879 13:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:17:58.542 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.542 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.542 13:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.542 13:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.542 13:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.542 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.542 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:58.542 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:58.803 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:58.803 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.803 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.803 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:58.803 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:58.803 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.803 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.804 13:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.804 13:03:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.804 13:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.804 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.804 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.064 00:17:59.064 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.064 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.064 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.064 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.064 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.064 13:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.064 13:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.064 13:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.064 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.064 { 00:17:59.064 "cntlid": 13, 00:17:59.064 "qid": 0, 00:17:59.064 "state": "enabled", 00:17:59.064 "thread": "nvmf_tgt_poll_group_000", 00:17:59.064 "listen_address": { 00:17:59.064 "trtype": "TCP", 00:17:59.064 "adrfam": "IPv4", 00:17:59.064 "traddr": "10.0.0.2", 00:17:59.064 "trsvcid": "4420" 00:17:59.064 }, 00:17:59.064 "peer_address": { 00:17:59.064 "trtype": "TCP", 00:17:59.064 "adrfam": "IPv4", 00:17:59.064 "traddr": "10.0.0.1", 00:17:59.064 "trsvcid": "56520" 00:17:59.064 }, 00:17:59.064 "auth": { 00:17:59.064 "state": "completed", 00:17:59.064 "digest": "sha256", 00:17:59.064 "dhgroup": "ffdhe2048" 00:17:59.064 } 00:17:59.064 } 00:17:59.064 ]' 00:17:59.064 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.064 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.325 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.325 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.325 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.325 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.325 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.325 13:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.325 13:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.267 13:03:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.267 13:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.267 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.267 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.528 00:18:00.528 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.528 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:00.528 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.789 { 00:18:00.789 "cntlid": 15, 00:18:00.789 "qid": 0, 00:18:00.789 "state": "enabled", 00:18:00.789 "thread": "nvmf_tgt_poll_group_000", 00:18:00.789 "listen_address": { 00:18:00.789 "trtype": "TCP", 00:18:00.789 "adrfam": "IPv4", 00:18:00.789 "traddr": "10.0.0.2", 00:18:00.789 "trsvcid": "4420" 00:18:00.789 }, 00:18:00.789 "peer_address": { 00:18:00.789 "trtype": "TCP", 00:18:00.789 "adrfam": "IPv4", 00:18:00.789 "traddr": "10.0.0.1", 00:18:00.789 "trsvcid": "56556" 00:18:00.789 }, 00:18:00.789 "auth": { 00:18:00.789 "state": "completed", 00:18:00.789 "digest": "sha256", 00:18:00.789 "dhgroup": "ffdhe2048" 00:18:00.789 } 00:18:00.789 } 00:18:00.789 ]' 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.789 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.050 13:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:18:01.620 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.620 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:01.620 13:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.620 13:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.620 13:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.620 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.620 13:03:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.620 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.620 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.879 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:01.879 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.879 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.879 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:01.879 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.879 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.879 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.880 13:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.880 13:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.880 13:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.880 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.880 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.155 00:18:02.155 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.155 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.155 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.155 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.155 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.155 13:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.155 13:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.155 13:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.155 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.155 { 00:18:02.155 "cntlid": 17, 00:18:02.155 "qid": 0, 00:18:02.155 "state": "enabled", 00:18:02.155 "thread": "nvmf_tgt_poll_group_000", 00:18:02.155 "listen_address": { 00:18:02.155 "trtype": "TCP", 00:18:02.155 "adrfam": "IPv4", 
00:18:02.155 "traddr": "10.0.0.2", 00:18:02.155 "trsvcid": "4420" 00:18:02.155 }, 00:18:02.155 "peer_address": { 00:18:02.155 "trtype": "TCP", 00:18:02.155 "adrfam": "IPv4", 00:18:02.155 "traddr": "10.0.0.1", 00:18:02.155 "trsvcid": "52818" 00:18:02.155 }, 00:18:02.155 "auth": { 00:18:02.155 "state": "completed", 00:18:02.155 "digest": "sha256", 00:18:02.155 "dhgroup": "ffdhe3072" 00:18:02.155 } 00:18:02.155 } 00:18:02.155 ]' 00:18:02.155 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.416 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.416 13:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.416 13:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.416 13:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.416 13:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.416 13:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.416 13:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.416 13:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:18:03.359 13:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.359 13:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:03.359 13:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.359 13:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.359 13:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.359 13:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.359 13:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.359 13:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.359 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:03.359 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.359 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.359 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:03.359 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:03.359 13:03:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.359 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.359 13:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.359 13:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.359 13:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.359 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.359 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.620 00:18:03.620 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.620 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.620 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.880 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.880 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.880 13:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.880 13:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.880 13:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.880 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.880 { 00:18:03.880 "cntlid": 19, 00:18:03.880 "qid": 0, 00:18:03.880 "state": "enabled", 00:18:03.880 "thread": "nvmf_tgt_poll_group_000", 00:18:03.880 "listen_address": { 00:18:03.880 "trtype": "TCP", 00:18:03.880 "adrfam": "IPv4", 00:18:03.880 "traddr": "10.0.0.2", 00:18:03.880 "trsvcid": "4420" 00:18:03.880 }, 00:18:03.880 "peer_address": { 00:18:03.880 "trtype": "TCP", 00:18:03.880 "adrfam": "IPv4", 00:18:03.880 "traddr": "10.0.0.1", 00:18:03.880 "trsvcid": "52848" 00:18:03.880 }, 00:18:03.880 "auth": { 00:18:03.880 "state": "completed", 00:18:03.880 "digest": "sha256", 00:18:03.880 "dhgroup": "ffdhe3072" 00:18:03.880 } 00:18:03.880 } 00:18:03.880 ]' 00:18:03.880 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.880 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.880 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.880 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.880 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.880 13:03:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.880 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.880 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.140 13:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:18:04.712 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.712 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:04.712 13:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.712 13:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.712 13:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.712 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.712 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.712 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.973 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:04.973 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.973 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.973 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:04.973 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.973 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.973 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.973 13:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.973 13:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.973 13:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.973 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.973 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.234 00:18:05.234 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.234 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.234 13:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.234 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.234 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.234 13:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.234 13:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.234 13:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.234 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.234 { 00:18:05.234 "cntlid": 21, 00:18:05.234 "qid": 0, 00:18:05.234 "state": "enabled", 00:18:05.234 "thread": "nvmf_tgt_poll_group_000", 00:18:05.234 "listen_address": { 00:18:05.234 "trtype": "TCP", 00:18:05.234 "adrfam": "IPv4", 00:18:05.234 "traddr": "10.0.0.2", 00:18:05.234 "trsvcid": "4420" 00:18:05.234 }, 00:18:05.234 "peer_address": { 00:18:05.234 "trtype": "TCP", 00:18:05.234 "adrfam": "IPv4", 00:18:05.234 "traddr": "10.0.0.1", 00:18:05.234 "trsvcid": "52878" 00:18:05.234 }, 00:18:05.234 "auth": { 00:18:05.234 "state": "completed", 00:18:05.234 "digest": "sha256", 00:18:05.234 "dhgroup": "ffdhe3072" 00:18:05.234 } 00:18:05.234 } 00:18:05.234 ]' 00:18:05.234 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.496 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.496 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.496 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:05.496 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.496 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.496 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.496 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.496 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:18:06.438 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
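
After each host-bdev round trip, the test also exercises the kernel initiator with the same credentials, as the nvme connect / nvme disconnect entries above show. A minimal sketch of that step, again reconstructed from the trace: the DHHC-1 secret strings are reduced to placeholders here (the real values appear in the log), while the NQN and host ID literals are the ones used throughout this run.

# kernel-initiator check with explicit DH-HMAC-CHAP secrets (placeholders shown)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --dhchap-secret 'DHHC-1:02:<host secret>' \
    --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>'

# once the fabric connect (and its authentication exchange) succeeds, tear it down
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# and drop the host entry so the next key/dhgroup combination starts clean
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
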
00:18:06.438 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:06.438 13:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.438 13:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.438 13:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.438 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.438 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.438 13:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.438 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:06.438 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.438 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.438 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:06.438 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.438 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.438 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:06.438 13:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.438 13:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.438 13:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.438 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.438 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.698 00:18:06.698 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.698 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.698 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.698 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.698 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.698 13:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.698 13:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:06.698 13:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.698 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.698 { 00:18:06.698 "cntlid": 23, 00:18:06.698 "qid": 0, 00:18:06.698 "state": "enabled", 00:18:06.698 "thread": "nvmf_tgt_poll_group_000", 00:18:06.698 "listen_address": { 00:18:06.698 "trtype": "TCP", 00:18:06.698 "adrfam": "IPv4", 00:18:06.698 "traddr": "10.0.0.2", 00:18:06.698 "trsvcid": "4420" 00:18:06.698 }, 00:18:06.698 "peer_address": { 00:18:06.698 "trtype": "TCP", 00:18:06.698 "adrfam": "IPv4", 00:18:06.698 "traddr": "10.0.0.1", 00:18:06.698 "trsvcid": "52902" 00:18:06.698 }, 00:18:06.698 "auth": { 00:18:06.698 "state": "completed", 00:18:06.698 "digest": "sha256", 00:18:06.698 "dhgroup": "ffdhe3072" 00:18:06.698 } 00:18:06.698 } 00:18:06.698 ]' 00:18:06.698 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.958 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.958 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.958 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:06.958 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.958 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.958 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.958 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.218 13:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:18:07.788 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.789 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:07.789 13:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.789 13:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.789 13:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.789 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.789 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.789 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:07.789 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:08.049 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:08.049 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.049 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.049 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:08.049 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.049 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.050 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.050 13:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.050 13:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.050 13:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.050 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.050 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.309 00:18:08.309 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.309 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.309 13:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.309 13:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.309 13:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.309 13:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.309 13:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.569 13:03:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.569 13:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.569 { 00:18:08.569 "cntlid": 25, 00:18:08.569 "qid": 0, 00:18:08.569 "state": "enabled", 00:18:08.569 "thread": "nvmf_tgt_poll_group_000", 00:18:08.569 "listen_address": { 00:18:08.569 "trtype": "TCP", 00:18:08.569 "adrfam": "IPv4", 00:18:08.569 "traddr": "10.0.0.2", 00:18:08.569 "trsvcid": "4420" 00:18:08.569 }, 00:18:08.569 "peer_address": { 00:18:08.569 "trtype": "TCP", 00:18:08.569 "adrfam": "IPv4", 00:18:08.569 "traddr": "10.0.0.1", 00:18:08.569 "trsvcid": "52934" 00:18:08.569 }, 00:18:08.569 "auth": { 00:18:08.569 "state": "completed", 00:18:08.569 "digest": "sha256", 00:18:08.569 "dhgroup": "ffdhe4096" 00:18:08.569 } 00:18:08.569 } 00:18:08.569 ]' 00:18:08.569 13:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.569 13:03:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.569 13:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.569 13:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.569 13:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.569 13:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.569 13:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.569 13:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.830 13:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:18:09.399 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.399 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:09.399 13:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.399 13:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.399 13:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.399 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.399 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:09.399 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:09.658 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:09.658 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.658 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.658 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:09.658 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.658 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.658 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.658 13:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.658 13:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.658 13:03:31 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.658 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.658 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.658 00:18:09.918 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.918 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.918 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.918 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.918 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.918 13:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.918 13:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.918 13:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.918 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.918 { 00:18:09.918 "cntlid": 27, 00:18:09.918 "qid": 0, 00:18:09.918 "state": "enabled", 00:18:09.918 "thread": "nvmf_tgt_poll_group_000", 00:18:09.918 "listen_address": { 00:18:09.918 "trtype": "TCP", 00:18:09.918 "adrfam": "IPv4", 00:18:09.918 "traddr": "10.0.0.2", 00:18:09.918 "trsvcid": "4420" 00:18:09.918 }, 00:18:09.918 "peer_address": { 00:18:09.918 "trtype": "TCP", 00:18:09.918 "adrfam": "IPv4", 00:18:09.918 "traddr": "10.0.0.1", 00:18:09.918 "trsvcid": "52962" 00:18:09.918 }, 00:18:09.918 "auth": { 00:18:09.918 "state": "completed", 00:18:09.918 "digest": "sha256", 00:18:09.918 "dhgroup": "ffdhe4096" 00:18:09.918 } 00:18:09.918 } 00:18:09.918 ]' 00:18:09.918 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.918 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.918 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.178 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.178 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.178 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.178 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.178 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.178 13:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.119 13:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.379 00:18:11.380 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.380 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.380 13:03:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.641 { 00:18:11.641 "cntlid": 29, 00:18:11.641 "qid": 0, 00:18:11.641 "state": "enabled", 00:18:11.641 "thread": "nvmf_tgt_poll_group_000", 00:18:11.641 "listen_address": { 00:18:11.641 "trtype": "TCP", 00:18:11.641 "adrfam": "IPv4", 00:18:11.641 "traddr": "10.0.0.2", 00:18:11.641 "trsvcid": "4420" 00:18:11.641 }, 00:18:11.641 "peer_address": { 00:18:11.641 "trtype": "TCP", 00:18:11.641 "adrfam": "IPv4", 00:18:11.641 "traddr": "10.0.0.1", 00:18:11.641 "trsvcid": "52990" 00:18:11.641 }, 00:18:11.641 "auth": { 00:18:11.641 "state": "completed", 00:18:11.641 "digest": "sha256", 00:18:11.641 "dhgroup": "ffdhe4096" 00:18:11.641 } 00:18:11.641 } 00:18:11.641 ]' 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.641 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.902 13:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:18:12.472 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.472 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:12.472 13:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.472 13:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.734 13:03:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.734 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.995 00:18:12.995 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.995 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.995 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.257 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.257 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.257 13:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.257 13:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.257 13:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.257 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.257 { 00:18:13.257 "cntlid": 31, 00:18:13.257 "qid": 0, 00:18:13.257 "state": "enabled", 00:18:13.257 "thread": "nvmf_tgt_poll_group_000", 00:18:13.257 "listen_address": { 00:18:13.257 "trtype": "TCP", 00:18:13.257 "adrfam": "IPv4", 00:18:13.257 "traddr": "10.0.0.2", 00:18:13.257 "trsvcid": "4420" 00:18:13.257 }, 
00:18:13.257 "peer_address": { 00:18:13.257 "trtype": "TCP", 00:18:13.257 "adrfam": "IPv4", 00:18:13.257 "traddr": "10.0.0.1", 00:18:13.257 "trsvcid": "45836" 00:18:13.257 }, 00:18:13.257 "auth": { 00:18:13.257 "state": "completed", 00:18:13.257 "digest": "sha256", 00:18:13.257 "dhgroup": "ffdhe4096" 00:18:13.257 } 00:18:13.257 } 00:18:13.257 ]' 00:18:13.257 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.257 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.257 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.257 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.257 13:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.257 13:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.257 13:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.257 13:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.518 13:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:18:14.090 13:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.090 13:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:14.090 13:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.090 13:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.090 13:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.090 13:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.090 13:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.090 13:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:14.090 13:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:14.352 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:14.352 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.352 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.352 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:14.352 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:14.352 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:14.352 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.352 13:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.352 13:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.352 13:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.352 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.352 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.613 00:18:14.613 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.613 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.613 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.873 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.874 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.874 13:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.874 13:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.874 13:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.874 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.874 { 00:18:14.874 "cntlid": 33, 00:18:14.874 "qid": 0, 00:18:14.874 "state": "enabled", 00:18:14.874 "thread": "nvmf_tgt_poll_group_000", 00:18:14.874 "listen_address": { 00:18:14.874 "trtype": "TCP", 00:18:14.874 "adrfam": "IPv4", 00:18:14.874 "traddr": "10.0.0.2", 00:18:14.874 "trsvcid": "4420" 00:18:14.874 }, 00:18:14.874 "peer_address": { 00:18:14.874 "trtype": "TCP", 00:18:14.874 "adrfam": "IPv4", 00:18:14.874 "traddr": "10.0.0.1", 00:18:14.874 "trsvcid": "45870" 00:18:14.874 }, 00:18:14.874 "auth": { 00:18:14.874 "state": "completed", 00:18:14.874 "digest": "sha256", 00:18:14.874 "dhgroup": "ffdhe6144" 00:18:14.874 } 00:18:14.874 } 00:18:14.874 ]' 00:18:14.874 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.874 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.874 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.874 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:14.874 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.134 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.134 13:03:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.134 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.134 13:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.078 13:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.339 00:18:16.339 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.339 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.339 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.600 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.600 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.600 13:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.600 13:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.600 13:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.600 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.600 { 00:18:16.600 "cntlid": 35, 00:18:16.600 "qid": 0, 00:18:16.600 "state": "enabled", 00:18:16.600 "thread": "nvmf_tgt_poll_group_000", 00:18:16.600 "listen_address": { 00:18:16.600 "trtype": "TCP", 00:18:16.600 "adrfam": "IPv4", 00:18:16.600 "traddr": "10.0.0.2", 00:18:16.600 "trsvcid": "4420" 00:18:16.600 }, 00:18:16.600 "peer_address": { 00:18:16.600 "trtype": "TCP", 00:18:16.600 "adrfam": "IPv4", 00:18:16.600 "traddr": "10.0.0.1", 00:18:16.600 "trsvcid": "45898" 00:18:16.600 }, 00:18:16.600 "auth": { 00:18:16.600 "state": "completed", 00:18:16.600 "digest": "sha256", 00:18:16.600 "dhgroup": "ffdhe6144" 00:18:16.600 } 00:18:16.600 } 00:18:16.600 ]' 00:18:16.600 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.600 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.600 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.601 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.601 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.601 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.601 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.601 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.861 13:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:18:17.432 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.432 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:17.432 13:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.432 13:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.432 13:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.432 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.432 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:17.433 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:17.694 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:17.694 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.694 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.694 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:17.694 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.694 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.694 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.694 13:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.694 13:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.694 13:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.694 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.694 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.955 00:18:17.955 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.955 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.955 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.215 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.215 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.215 13:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.215 13:03:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.215 13:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.215 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.215 { 00:18:18.215 "cntlid": 37, 00:18:18.215 "qid": 0, 00:18:18.215 "state": "enabled", 00:18:18.215 "thread": "nvmf_tgt_poll_group_000", 00:18:18.215 "listen_address": { 00:18:18.215 "trtype": "TCP", 00:18:18.215 "adrfam": "IPv4", 00:18:18.215 "traddr": "10.0.0.2", 00:18:18.215 "trsvcid": "4420" 00:18:18.215 }, 00:18:18.215 "peer_address": { 00:18:18.215 "trtype": "TCP", 00:18:18.215 "adrfam": "IPv4", 00:18:18.215 "traddr": "10.0.0.1", 00:18:18.215 "trsvcid": "45922" 00:18:18.215 }, 00:18:18.215 "auth": { 00:18:18.215 "state": "completed", 00:18:18.215 "digest": "sha256", 00:18:18.215 "dhgroup": "ffdhe6144" 00:18:18.215 } 00:18:18.215 } 00:18:18.215 ]' 00:18:18.215 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.215 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.215 13:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.215 13:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.215 13:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.475 13:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.475 13:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.475 13:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.476 13:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:18:19.418 13:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.418 13:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:19.418 13:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.418 13:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.418 13:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.418 13:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.418 13:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.418 13:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.418 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:18:19.418 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.418 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.418 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:19.418 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.418 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.418 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:19.418 13:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.418 13:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.418 13:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.418 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.418 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.679 00:18:19.679 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.679 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.679 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.940 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.940 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.940 13:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.940 13:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.941 13:03:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.941 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.941 { 00:18:19.941 "cntlid": 39, 00:18:19.941 "qid": 0, 00:18:19.941 "state": "enabled", 00:18:19.941 "thread": "nvmf_tgt_poll_group_000", 00:18:19.941 "listen_address": { 00:18:19.941 "trtype": "TCP", 00:18:19.941 "adrfam": "IPv4", 00:18:19.941 "traddr": "10.0.0.2", 00:18:19.941 "trsvcid": "4420" 00:18:19.941 }, 00:18:19.941 "peer_address": { 00:18:19.941 "trtype": "TCP", 00:18:19.941 "adrfam": "IPv4", 00:18:19.941 "traddr": "10.0.0.1", 00:18:19.941 "trsvcid": "45956" 00:18:19.941 }, 00:18:19.941 "auth": { 00:18:19.941 "state": "completed", 00:18:19.941 "digest": "sha256", 00:18:19.941 "dhgroup": "ffdhe6144" 00:18:19.941 } 00:18:19.941 } 00:18:19.941 ]' 00:18:19.941 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.941 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.941 13:03:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.941 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.941 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.202 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.202 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.202 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.202 13:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.145 13:03:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.145 13:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.716 00:18:21.716 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.716 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.716 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.716 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.716 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.716 13:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.716 13:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.716 13:03:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.716 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.716 { 00:18:21.716 "cntlid": 41, 00:18:21.716 "qid": 0, 00:18:21.716 "state": "enabled", 00:18:21.716 "thread": "nvmf_tgt_poll_group_000", 00:18:21.716 "listen_address": { 00:18:21.716 "trtype": "TCP", 00:18:21.716 "adrfam": "IPv4", 00:18:21.716 "traddr": "10.0.0.2", 00:18:21.716 "trsvcid": "4420" 00:18:21.716 }, 00:18:21.716 "peer_address": { 00:18:21.716 "trtype": "TCP", 00:18:21.716 "adrfam": "IPv4", 00:18:21.716 "traddr": "10.0.0.1", 00:18:21.716 "trsvcid": "45984" 00:18:21.716 }, 00:18:21.716 "auth": { 00:18:21.716 "state": "completed", 00:18:21.716 "digest": "sha256", 00:18:21.716 "dhgroup": "ffdhe8192" 00:18:21.716 } 00:18:21.716 } 00:18:21.716 ]' 00:18:21.716 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.976 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.976 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.976 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.976 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.976 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.976 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.976 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.976 13:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.918 13:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.490 00:18:23.490 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.490 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.490 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.490 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.490 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.490 13:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.490 13:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.490 13:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.490 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.490 { 00:18:23.490 "cntlid": 43, 00:18:23.490 "qid": 0, 00:18:23.490 "state": "enabled", 00:18:23.490 "thread": "nvmf_tgt_poll_group_000", 00:18:23.490 "listen_address": { 00:18:23.490 "trtype": "TCP", 00:18:23.490 "adrfam": "IPv4", 00:18:23.490 "traddr": "10.0.0.2", 00:18:23.490 "trsvcid": "4420" 00:18:23.490 }, 00:18:23.490 "peer_address": { 00:18:23.490 "trtype": "TCP", 00:18:23.490 "adrfam": "IPv4", 00:18:23.490 "traddr": "10.0.0.1", 00:18:23.490 "trsvcid": "57144" 00:18:23.490 }, 00:18:23.490 "auth": { 00:18:23.490 "state": "completed", 00:18:23.490 "digest": "sha256", 00:18:23.490 "dhgroup": "ffdhe8192" 00:18:23.490 } 00:18:23.490 } 00:18:23.490 ]' 00:18:23.490 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.751 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.751 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.751 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.751 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.751 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.751 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.751 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.012 13:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:18:24.583 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.583 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.583 13:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.583 13:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.583 13:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.583 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:24.583 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:24.583 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:24.843 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:24.843 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.843 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.843 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:24.843 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:24.843 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.843 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.843 13:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.843 13:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.843 13:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.843 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.843 13:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.414 00:18:25.414 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.414 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.414 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.414 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.414 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.414 13:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.414 13:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.414 13:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.415 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.415 { 00:18:25.415 "cntlid": 45, 00:18:25.415 "qid": 0, 00:18:25.415 "state": "enabled", 00:18:25.415 "thread": "nvmf_tgt_poll_group_000", 00:18:25.415 "listen_address": { 00:18:25.415 "trtype": "TCP", 00:18:25.415 "adrfam": "IPv4", 00:18:25.415 "traddr": "10.0.0.2", 00:18:25.415 "trsvcid": "4420" 
00:18:25.415 }, 00:18:25.415 "peer_address": { 00:18:25.415 "trtype": "TCP", 00:18:25.415 "adrfam": "IPv4", 00:18:25.415 "traddr": "10.0.0.1", 00:18:25.415 "trsvcid": "57174" 00:18:25.415 }, 00:18:25.415 "auth": { 00:18:25.415 "state": "completed", 00:18:25.415 "digest": "sha256", 00:18:25.415 "dhgroup": "ffdhe8192" 00:18:25.415 } 00:18:25.415 } 00:18:25.415 ]' 00:18:25.415 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.676 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.676 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.676 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.676 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.676 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.676 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.676 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.676 13:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.618 13:03:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.618 13:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.619 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.619 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.190 00:18:27.190 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.190 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.190 13:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.451 13:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.451 13:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.451 13:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.451 13:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.451 13:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.451 13:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.451 { 00:18:27.451 "cntlid": 47, 00:18:27.451 "qid": 0, 00:18:27.451 "state": "enabled", 00:18:27.451 "thread": "nvmf_tgt_poll_group_000", 00:18:27.451 "listen_address": { 00:18:27.451 "trtype": "TCP", 00:18:27.451 "adrfam": "IPv4", 00:18:27.451 "traddr": "10.0.0.2", 00:18:27.451 "trsvcid": "4420" 00:18:27.451 }, 00:18:27.451 "peer_address": { 00:18:27.451 "trtype": "TCP", 00:18:27.451 "adrfam": "IPv4", 00:18:27.451 "traddr": "10.0.0.1", 00:18:27.451 "trsvcid": "57200" 00:18:27.451 }, 00:18:27.451 "auth": { 00:18:27.451 "state": "completed", 00:18:27.451 "digest": "sha256", 00:18:27.451 "dhgroup": "ffdhe8192" 00:18:27.451 } 00:18:27.451 } 00:18:27.451 ]' 00:18:27.451 13:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.451 13:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.451 13:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.451 13:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.451 13:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.451 13:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.451 13:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.451 
13:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.712 13:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:18:28.283 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.283 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:28.283 13:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.283 13:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.283 13:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.283 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:28.283 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.283 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.283 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:28.283 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:28.543 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:28.543 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.543 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.543 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:28.543 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:28.543 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.543 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.543 13:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.543 13:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.543 13:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.543 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.543 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.803 00:18:28.804 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.804 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.804 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.064 { 00:18:29.064 "cntlid": 49, 00:18:29.064 "qid": 0, 00:18:29.064 "state": "enabled", 00:18:29.064 "thread": "nvmf_tgt_poll_group_000", 00:18:29.064 "listen_address": { 00:18:29.064 "trtype": "TCP", 00:18:29.064 "adrfam": "IPv4", 00:18:29.064 "traddr": "10.0.0.2", 00:18:29.064 "trsvcid": "4420" 00:18:29.064 }, 00:18:29.064 "peer_address": { 00:18:29.064 "trtype": "TCP", 00:18:29.064 "adrfam": "IPv4", 00:18:29.064 "traddr": "10.0.0.1", 00:18:29.064 "trsvcid": "57216" 00:18:29.064 }, 00:18:29.064 "auth": { 00:18:29.064 "state": "completed", 00:18:29.064 "digest": "sha384", 00:18:29.064 "dhgroup": "null" 00:18:29.064 } 00:18:29.064 } 00:18:29.064 ]' 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.064 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.324 13:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:18:29.895 13:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.895 13:03:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:29.895 13:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.895 13:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.895 13:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.895 13:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.895 13:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:29.895 13:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:30.155 13:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:30.155 13:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.155 13:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.155 13:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:30.155 13:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:30.155 13:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.155 13:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.155 13:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.155 13:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.155 13:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.155 13:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.155 13:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.414 00:18:30.414 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.414 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.414 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.414 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.673 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.673 13:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.673 13:03:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.673 13:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.673 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.673 { 00:18:30.673 "cntlid": 51, 00:18:30.673 "qid": 0, 00:18:30.673 "state": "enabled", 00:18:30.673 "thread": "nvmf_tgt_poll_group_000", 00:18:30.673 "listen_address": { 00:18:30.673 "trtype": "TCP", 00:18:30.673 "adrfam": "IPv4", 00:18:30.673 "traddr": "10.0.0.2", 00:18:30.673 "trsvcid": "4420" 00:18:30.673 }, 00:18:30.673 "peer_address": { 00:18:30.673 "trtype": "TCP", 00:18:30.673 "adrfam": "IPv4", 00:18:30.673 "traddr": "10.0.0.1", 00:18:30.673 "trsvcid": "57242" 00:18:30.673 }, 00:18:30.673 "auth": { 00:18:30.673 "state": "completed", 00:18:30.673 "digest": "sha384", 00:18:30.673 "dhgroup": "null" 00:18:30.673 } 00:18:30.673 } 00:18:30.673 ]' 00:18:30.673 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.673 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.673 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.673 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:30.673 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.673 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.673 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.673 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.934 13:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:18:31.505 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.505 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.505 13:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.505 13:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.505 13:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.505 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.506 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.506 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.766 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:31.766 13:03:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.766 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.766 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:31.766 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:31.766 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.766 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.766 13:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.766 13:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.766 13:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.766 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.766 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.027 00:18:32.027 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.027 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.027 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.027 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.027 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.027 13:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.027 13:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.027 13:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.027 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.027 { 00:18:32.027 "cntlid": 53, 00:18:32.027 "qid": 0, 00:18:32.027 "state": "enabled", 00:18:32.027 "thread": "nvmf_tgt_poll_group_000", 00:18:32.027 "listen_address": { 00:18:32.027 "trtype": "TCP", 00:18:32.027 "adrfam": "IPv4", 00:18:32.027 "traddr": "10.0.0.2", 00:18:32.027 "trsvcid": "4420" 00:18:32.027 }, 00:18:32.027 "peer_address": { 00:18:32.027 "trtype": "TCP", 00:18:32.027 "adrfam": "IPv4", 00:18:32.027 "traddr": "10.0.0.1", 00:18:32.027 "trsvcid": "54956" 00:18:32.027 }, 00:18:32.027 "auth": { 00:18:32.027 "state": "completed", 00:18:32.027 "digest": "sha384", 00:18:32.027 "dhgroup": "null" 00:18:32.027 } 00:18:32.027 } 00:18:32.027 ]' 00:18:32.027 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.287 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:32.287 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.287 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:32.287 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.287 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.287 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.287 13:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.548 13:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:18:33.119 13:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.119 13:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.119 13:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.119 13:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.119 13:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.119 13:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.119 13:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:33.119 13:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:33.379 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:33.379 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.379 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.379 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:33.379 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:33.379 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.379 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:33.379 13:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.379 13:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.379 13:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.379 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.379 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.640 00:18:33.640 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.640 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.640 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.640 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.640 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.640 13:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.640 13:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.640 13:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.640 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.640 { 00:18:33.640 "cntlid": 55, 00:18:33.640 "qid": 0, 00:18:33.640 "state": "enabled", 00:18:33.640 "thread": "nvmf_tgt_poll_group_000", 00:18:33.640 "listen_address": { 00:18:33.640 "trtype": "TCP", 00:18:33.640 "adrfam": "IPv4", 00:18:33.640 "traddr": "10.0.0.2", 00:18:33.640 "trsvcid": "4420" 00:18:33.640 }, 00:18:33.640 "peer_address": { 00:18:33.640 "trtype": "TCP", 00:18:33.640 "adrfam": "IPv4", 00:18:33.640 "traddr": "10.0.0.1", 00:18:33.640 "trsvcid": "54984" 00:18:33.640 }, 00:18:33.640 "auth": { 00:18:33.640 "state": "completed", 00:18:33.640 "digest": "sha384", 00:18:33.640 "dhgroup": "null" 00:18:33.640 } 00:18:33.640 } 00:18:33.640 ]' 00:18:33.640 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.901 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.901 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.901 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:33.901 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.901 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.901 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.901 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.161 13:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:18:34.730 13:03:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.730 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:34.730 13:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.730 13:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.730 13:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.730 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.730 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.730 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.730 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.990 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:34.990 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.990 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.990 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:34.990 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:34.990 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.990 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.990 13:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.990 13:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.990 13:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.990 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.990 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.249 00:18:35.249 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.249 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.249 13:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.249 13:03:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.249 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.249 13:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.249 13:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.249 13:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.249 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.249 { 00:18:35.249 "cntlid": 57, 00:18:35.249 "qid": 0, 00:18:35.249 "state": "enabled", 00:18:35.249 "thread": "nvmf_tgt_poll_group_000", 00:18:35.249 "listen_address": { 00:18:35.249 "trtype": "TCP", 00:18:35.249 "adrfam": "IPv4", 00:18:35.249 "traddr": "10.0.0.2", 00:18:35.249 "trsvcid": "4420" 00:18:35.249 }, 00:18:35.249 "peer_address": { 00:18:35.249 "trtype": "TCP", 00:18:35.249 "adrfam": "IPv4", 00:18:35.249 "traddr": "10.0.0.1", 00:18:35.249 "trsvcid": "55002" 00:18:35.249 }, 00:18:35.249 "auth": { 00:18:35.250 "state": "completed", 00:18:35.250 "digest": "sha384", 00:18:35.250 "dhgroup": "ffdhe2048" 00:18:35.250 } 00:18:35.250 } 00:18:35.250 ]' 00:18:35.250 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.510 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.510 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.510 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.510 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.510 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.510 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.510 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.510 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:18:36.450 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.450 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:36.450 13:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.450 13:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.450 13:03:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.450 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.450 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:36.450 13:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:36.450 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:36.450 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.450 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.450 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:36.450 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:36.450 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.450 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.450 13:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.450 13:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.450 13:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.450 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.450 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.710 00:18:36.710 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.710 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.710 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.710 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.710 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.710 13:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.710 13:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.710 13:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.710 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.710 { 00:18:36.710 "cntlid": 59, 00:18:36.710 "qid": 0, 00:18:36.710 "state": "enabled", 00:18:36.710 "thread": "nvmf_tgt_poll_group_000", 00:18:36.710 "listen_address": { 00:18:36.710 "trtype": "TCP", 00:18:36.710 "adrfam": "IPv4", 00:18:36.710 "traddr": "10.0.0.2", 00:18:36.710 "trsvcid": "4420" 00:18:36.710 }, 00:18:36.710 "peer_address": { 00:18:36.710 "trtype": "TCP", 00:18:36.710 "adrfam": "IPv4", 00:18:36.710 
"traddr": "10.0.0.1", 00:18:36.710 "trsvcid": "55034" 00:18:36.710 }, 00:18:36.710 "auth": { 00:18:36.710 "state": "completed", 00:18:36.710 "digest": "sha384", 00:18:36.710 "dhgroup": "ffdhe2048" 00:18:36.710 } 00:18:36.710 } 00:18:36.710 ]' 00:18:36.710 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.971 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.971 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.971 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:36.971 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.971 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.971 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.971 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.230 13:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:18:37.825 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.825 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:37.825 13:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.825 13:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.825 13:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.825 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.825 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.825 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:38.104 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.105 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.105 13:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.371 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.371 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.371 13:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.371 13:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.371 13:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.371 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.371 { 00:18:38.371 "cntlid": 61, 00:18:38.371 "qid": 0, 00:18:38.371 "state": "enabled", 00:18:38.371 "thread": "nvmf_tgt_poll_group_000", 00:18:38.371 "listen_address": { 00:18:38.371 "trtype": "TCP", 00:18:38.371 "adrfam": "IPv4", 00:18:38.371 "traddr": "10.0.0.2", 00:18:38.371 "trsvcid": "4420" 00:18:38.371 }, 00:18:38.371 "peer_address": { 00:18:38.371 "trtype": "TCP", 00:18:38.371 "adrfam": "IPv4", 00:18:38.371 "traddr": "10.0.0.1", 00:18:38.371 "trsvcid": "55056" 00:18:38.371 }, 00:18:38.371 "auth": { 00:18:38.371 "state": "completed", 00:18:38.371 "digest": "sha384", 00:18:38.371 "dhgroup": "ffdhe2048" 00:18:38.371 } 00:18:38.371 } 00:18:38.371 ]' 00:18:38.371 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.371 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.371 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.371 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.371 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.678 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.678 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.678 13:04:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.678 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:18:39.249 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.249 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:39.249 13:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.249 13:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.249 13:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.249 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.249 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.249 13:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.510 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:39.510 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.510 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:39.510 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:39.510 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:39.510 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.510 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:39.510 13:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.510 13:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.510 13:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.510 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.510 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.770 00:18:39.770 13:04:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.770 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.770 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.770 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.770 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.770 13:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.770 13:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.770 13:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.770 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.770 { 00:18:39.770 "cntlid": 63, 00:18:39.770 "qid": 0, 00:18:39.770 "state": "enabled", 00:18:39.770 "thread": "nvmf_tgt_poll_group_000", 00:18:39.770 "listen_address": { 00:18:39.770 "trtype": "TCP", 00:18:39.770 "adrfam": "IPv4", 00:18:39.770 "traddr": "10.0.0.2", 00:18:39.770 "trsvcid": "4420" 00:18:39.770 }, 00:18:39.770 "peer_address": { 00:18:39.770 "trtype": "TCP", 00:18:39.770 "adrfam": "IPv4", 00:18:39.770 "traddr": "10.0.0.1", 00:18:39.770 "trsvcid": "55098" 00:18:39.770 }, 00:18:39.770 "auth": { 00:18:39.770 "state": "completed", 00:18:39.770 "digest": "sha384", 00:18:39.770 "dhgroup": "ffdhe2048" 00:18:39.770 } 00:18:39.770 } 00:18:39.770 ]' 00:18:39.770 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.770 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.770 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.030 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:40.030 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.030 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.030 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.030 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.030 13:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:18:40.601 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.601 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:40.601 13:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.601 13:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:40.601 13:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.601 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.862 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.122 00:18:41.122 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.122 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.122 13:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.383 { 
00:18:41.383 "cntlid": 65, 00:18:41.383 "qid": 0, 00:18:41.383 "state": "enabled", 00:18:41.383 "thread": "nvmf_tgt_poll_group_000", 00:18:41.383 "listen_address": { 00:18:41.383 "trtype": "TCP", 00:18:41.383 "adrfam": "IPv4", 00:18:41.383 "traddr": "10.0.0.2", 00:18:41.383 "trsvcid": "4420" 00:18:41.383 }, 00:18:41.383 "peer_address": { 00:18:41.383 "trtype": "TCP", 00:18:41.383 "adrfam": "IPv4", 00:18:41.383 "traddr": "10.0.0.1", 00:18:41.383 "trsvcid": "55110" 00:18:41.383 }, 00:18:41.383 "auth": { 00:18:41.383 "state": "completed", 00:18:41.383 "digest": "sha384", 00:18:41.383 "dhgroup": "ffdhe3072" 00:18:41.383 } 00:18:41.383 } 00:18:41.383 ]' 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.383 13:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.643 13:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:18:42.212 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.212 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:42.212 13:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.212 13:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.473 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.733 00:18:42.733 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.733 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.733 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.994 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.994 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.994 13:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.994 13:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.994 13:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.994 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.994 { 00:18:42.994 "cntlid": 67, 00:18:42.994 "qid": 0, 00:18:42.994 "state": "enabled", 00:18:42.994 "thread": "nvmf_tgt_poll_group_000", 00:18:42.994 "listen_address": { 00:18:42.994 "trtype": "TCP", 00:18:42.994 "adrfam": "IPv4", 00:18:42.994 "traddr": "10.0.0.2", 00:18:42.994 "trsvcid": "4420" 00:18:42.994 }, 00:18:42.994 "peer_address": { 00:18:42.994 "trtype": "TCP", 00:18:42.994 "adrfam": "IPv4", 00:18:42.994 "traddr": "10.0.0.1", 00:18:42.994 "trsvcid": "38692" 00:18:42.994 }, 00:18:42.994 "auth": { 00:18:42.994 "state": "completed", 00:18:42.994 "digest": "sha384", 00:18:42.994 "dhgroup": "ffdhe3072" 00:18:42.994 } 00:18:42.994 } 00:18:42.994 ]' 00:18:42.994 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.994 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.994 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.994 13:04:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:42.994 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.994 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.994 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.994 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.254 13:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:18:43.833 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.833 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:43.833 13:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.833 13:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.833 13:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.833 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.833 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:43.833 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.094 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:44.094 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.094 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.094 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:44.094 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:44.094 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.094 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.094 13:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.094 13:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.094 13:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.094 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.094 13:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.354 00:18:44.354 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.354 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.354 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.615 { 00:18:44.615 "cntlid": 69, 00:18:44.615 "qid": 0, 00:18:44.615 "state": "enabled", 00:18:44.615 "thread": "nvmf_tgt_poll_group_000", 00:18:44.615 "listen_address": { 00:18:44.615 "trtype": "TCP", 00:18:44.615 "adrfam": "IPv4", 00:18:44.615 "traddr": "10.0.0.2", 00:18:44.615 "trsvcid": "4420" 00:18:44.615 }, 00:18:44.615 "peer_address": { 00:18:44.615 "trtype": "TCP", 00:18:44.615 "adrfam": "IPv4", 00:18:44.615 "traddr": "10.0.0.1", 00:18:44.615 "trsvcid": "38730" 00:18:44.615 }, 00:18:44.615 "auth": { 00:18:44.615 "state": "completed", 00:18:44.615 "digest": "sha384", 00:18:44.615 "dhgroup": "ffdhe3072" 00:18:44.615 } 00:18:44.615 } 00:18:44.615 ]' 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.615 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.874 13:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret 
DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:18:45.479 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.479 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:45.479 13:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.479 13:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.479 13:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.479 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.479 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:45.480 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:45.740 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:45.740 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.740 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.740 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:45.740 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:45.740 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.740 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:45.740 13:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.740 13:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.740 13:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.740 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.740 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.740 00:18:46.000 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.000 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.000 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.000 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.000 13:04:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.000 13:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.000 13:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.000 13:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.000 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.000 { 00:18:46.000 "cntlid": 71, 00:18:46.000 "qid": 0, 00:18:46.000 "state": "enabled", 00:18:46.000 "thread": "nvmf_tgt_poll_group_000", 00:18:46.000 "listen_address": { 00:18:46.000 "trtype": "TCP", 00:18:46.000 "adrfam": "IPv4", 00:18:46.001 "traddr": "10.0.0.2", 00:18:46.001 "trsvcid": "4420" 00:18:46.001 }, 00:18:46.001 "peer_address": { 00:18:46.001 "trtype": "TCP", 00:18:46.001 "adrfam": "IPv4", 00:18:46.001 "traddr": "10.0.0.1", 00:18:46.001 "trsvcid": "38744" 00:18:46.001 }, 00:18:46.001 "auth": { 00:18:46.001 "state": "completed", 00:18:46.001 "digest": "sha384", 00:18:46.001 "dhgroup": "ffdhe3072" 00:18:46.001 } 00:18:46.001 } 00:18:46.001 ]' 00:18:46.001 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.001 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.001 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.260 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:46.260 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.260 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.260 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.260 13:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.260 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:18:46.829 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.829 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.830 13:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.830 13:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.830 13:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.830 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.830 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.830 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:46.830 13:04:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:47.090 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:47.090 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.090 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.090 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:47.090 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:47.090 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.090 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.090 13:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.090 13:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.090 13:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.090 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.090 13:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.350 00:18:47.350 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.350 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.350 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.611 { 00:18:47.611 "cntlid": 73, 00:18:47.611 "qid": 0, 00:18:47.611 "state": "enabled", 00:18:47.611 "thread": "nvmf_tgt_poll_group_000", 00:18:47.611 "listen_address": { 00:18:47.611 "trtype": "TCP", 00:18:47.611 "adrfam": "IPv4", 00:18:47.611 "traddr": "10.0.0.2", 00:18:47.611 "trsvcid": "4420" 00:18:47.611 }, 00:18:47.611 "peer_address": { 00:18:47.611 "trtype": "TCP", 00:18:47.611 "adrfam": "IPv4", 00:18:47.611 "traddr": "10.0.0.1", 00:18:47.611 "trsvcid": "38772" 00:18:47.611 }, 00:18:47.611 "auth": { 00:18:47.611 
"state": "completed", 00:18:47.611 "digest": "sha384", 00:18:47.611 "dhgroup": "ffdhe4096" 00:18:47.611 } 00:18:47.611 } 00:18:47.611 ]' 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.611 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.872 13:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:18:48.444 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.444 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.444 13:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.444 13:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.444 13:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.444 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.444 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:48.444 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:48.704 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:48.704 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.704 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.704 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:48.704 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.704 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.704 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.704 13:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.704 13:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.704 13:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.704 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.704 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.964 00:18:48.964 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.964 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.965 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.965 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.965 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.965 13:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.965 13:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.965 13:04:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.965 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.965 { 00:18:48.965 "cntlid": 75, 00:18:48.965 "qid": 0, 00:18:48.965 "state": "enabled", 00:18:48.965 "thread": "nvmf_tgt_poll_group_000", 00:18:48.965 "listen_address": { 00:18:48.965 "trtype": "TCP", 00:18:48.965 "adrfam": "IPv4", 00:18:48.965 "traddr": "10.0.0.2", 00:18:48.965 "trsvcid": "4420" 00:18:48.965 }, 00:18:48.965 "peer_address": { 00:18:48.965 "trtype": "TCP", 00:18:48.965 "adrfam": "IPv4", 00:18:48.965 "traddr": "10.0.0.1", 00:18:48.965 "trsvcid": "38812" 00:18:48.965 }, 00:18:48.965 "auth": { 00:18:48.965 "state": "completed", 00:18:48.965 "digest": "sha384", 00:18:48.965 "dhgroup": "ffdhe4096" 00:18:48.965 } 00:18:48.965 } 00:18:48.965 ]' 00:18:48.965 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.225 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.225 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.225 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.225 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.225 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.225 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.225 13:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.225 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.166 13:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:50.426 00:18:50.426 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.426 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.426 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.687 { 00:18:50.687 "cntlid": 77, 00:18:50.687 "qid": 0, 00:18:50.687 "state": "enabled", 00:18:50.687 "thread": "nvmf_tgt_poll_group_000", 00:18:50.687 "listen_address": { 00:18:50.687 "trtype": "TCP", 00:18:50.687 "adrfam": "IPv4", 00:18:50.687 "traddr": "10.0.0.2", 00:18:50.687 "trsvcid": "4420" 00:18:50.687 }, 00:18:50.687 "peer_address": { 00:18:50.687 "trtype": "TCP", 00:18:50.687 "adrfam": "IPv4", 00:18:50.687 "traddr": "10.0.0.1", 00:18:50.687 "trsvcid": "38838" 00:18:50.687 }, 00:18:50.687 "auth": { 00:18:50.687 "state": "completed", 00:18:50.687 "digest": "sha384", 00:18:50.687 "dhgroup": "ffdhe4096" 00:18:50.687 } 00:18:50.687 } 00:18:50.687 ]' 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.687 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.948 13:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:18:51.518 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.519 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:51.519 13:04:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.519 13:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.519 13:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.519 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.519 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.519 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:51.779 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:51.779 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.779 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.779 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:51.779 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:51.779 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.779 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:51.779 13:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.779 13:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.779 13:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.779 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.779 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.040 00:18:52.040 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.040 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.040 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.040 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.040 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.040 13:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.040 13:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.040 13:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.040 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.040 { 00:18:52.040 "cntlid": 79, 00:18:52.040 "qid": 
0, 00:18:52.040 "state": "enabled", 00:18:52.040 "thread": "nvmf_tgt_poll_group_000", 00:18:52.040 "listen_address": { 00:18:52.040 "trtype": "TCP", 00:18:52.040 "adrfam": "IPv4", 00:18:52.040 "traddr": "10.0.0.2", 00:18:52.040 "trsvcid": "4420" 00:18:52.040 }, 00:18:52.040 "peer_address": { 00:18:52.040 "trtype": "TCP", 00:18:52.040 "adrfam": "IPv4", 00:18:52.040 "traddr": "10.0.0.1", 00:18:52.040 "trsvcid": "56774" 00:18:52.040 }, 00:18:52.040 "auth": { 00:18:52.040 "state": "completed", 00:18:52.040 "digest": "sha384", 00:18:52.040 "dhgroup": "ffdhe4096" 00:18:52.040 } 00:18:52.040 } 00:18:52.040 ]' 00:18:52.040 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.300 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.300 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.300 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:52.300 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.300 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.300 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.300 13:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.562 13:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:18:53.133 13:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.133 13:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.133 13:04:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.133 13:04:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.133 13:04:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.133 13:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.133 13:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.133 13:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:53.133 13:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:53.393 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:53.393 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.393 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.393 13:04:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:53.393 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:53.393 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.393 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.393 13:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.393 13:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.393 13:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.393 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.393 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.654 00:18:53.654 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.654 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.654 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.915 { 00:18:53.915 "cntlid": 81, 00:18:53.915 "qid": 0, 00:18:53.915 "state": "enabled", 00:18:53.915 "thread": "nvmf_tgt_poll_group_000", 00:18:53.915 "listen_address": { 00:18:53.915 "trtype": "TCP", 00:18:53.915 "adrfam": "IPv4", 00:18:53.915 "traddr": "10.0.0.2", 00:18:53.915 "trsvcid": "4420" 00:18:53.915 }, 00:18:53.915 "peer_address": { 00:18:53.915 "trtype": "TCP", 00:18:53.915 "adrfam": "IPv4", 00:18:53.915 "traddr": "10.0.0.1", 00:18:53.915 "trsvcid": "56802" 00:18:53.915 }, 00:18:53.915 "auth": { 00:18:53.915 "state": "completed", 00:18:53.915 "digest": "sha384", 00:18:53.915 "dhgroup": "ffdhe6144" 00:18:53.915 } 00:18:53.915 } 00:18:53.915 ]' 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.915 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.176 13:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:18:54.747 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.008 13:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.268 00:18:55.528 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.528 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.528 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.528 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.528 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.528 13:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.528 13:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.528 13:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.528 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.528 { 00:18:55.528 "cntlid": 83, 00:18:55.528 "qid": 0, 00:18:55.528 "state": "enabled", 00:18:55.528 "thread": "nvmf_tgt_poll_group_000", 00:18:55.528 "listen_address": { 00:18:55.528 "trtype": "TCP", 00:18:55.528 "adrfam": "IPv4", 00:18:55.529 "traddr": "10.0.0.2", 00:18:55.529 "trsvcid": "4420" 00:18:55.529 }, 00:18:55.529 "peer_address": { 00:18:55.529 "trtype": "TCP", 00:18:55.529 "adrfam": "IPv4", 00:18:55.529 "traddr": "10.0.0.1", 00:18:55.529 "trsvcid": "56840" 00:18:55.529 }, 00:18:55.529 "auth": { 00:18:55.529 "state": "completed", 00:18:55.529 "digest": "sha384", 00:18:55.529 "dhgroup": "ffdhe6144" 00:18:55.529 } 00:18:55.529 } 00:18:55.529 ]' 00:18:55.529 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.529 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.529 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.789 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.789 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.789 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.789 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.789 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.789 13:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret 
DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:18:56.728 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.729 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.300 00:18:57.300 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.300 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.300 13:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.300 13:04:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.300 13:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.300 13:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.300 13:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.300 13:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.300 13:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.300 { 00:18:57.300 "cntlid": 85, 00:18:57.300 "qid": 0, 00:18:57.300 "state": "enabled", 00:18:57.300 "thread": "nvmf_tgt_poll_group_000", 00:18:57.300 "listen_address": { 00:18:57.300 "trtype": "TCP", 00:18:57.300 "adrfam": "IPv4", 00:18:57.300 "traddr": "10.0.0.2", 00:18:57.300 "trsvcid": "4420" 00:18:57.300 }, 00:18:57.300 "peer_address": { 00:18:57.300 "trtype": "TCP", 00:18:57.300 "adrfam": "IPv4", 00:18:57.300 "traddr": "10.0.0.1", 00:18:57.300 "trsvcid": "56868" 00:18:57.300 }, 00:18:57.300 "auth": { 00:18:57.300 "state": "completed", 00:18:57.300 "digest": "sha384", 00:18:57.300 "dhgroup": "ffdhe6144" 00:18:57.300 } 00:18:57.300 } 00:18:57.300 ]' 00:18:57.300 13:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.300 13:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.300 13:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.560 13:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.560 13:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.560 13:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.560 13:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.560 13:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.560 13:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
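The trace above is one pass of the test's per-key cycle: bdev_nvme_set_options pins the host-side initiator to a single digest/dhgroup, nvmf_subsystem_add_host registers the host NQN on the subsystem with a DH-HMAC-CHAP key pair, and bdev_nvme_attach_controller authenticates with the matching keys. A minimal standalone sketch of that cycle is shown below; the NQNs, addresses and the /var/tmp/host.sock socket are copied from the trace, while the fixed loop values and the assumption that key0/ckey0 are already loaded in the SPDK keyring are illustrative only.

    # Sketch of one iteration of the per-key DH-HMAC-CHAP cycle (digest, dhgroup
    # and keyid normally come from the enclosing loops; key$keyid/ckey$keyid are
    # assumed to have been registered in the keyring earlier in the test).
    digest=sha384 dhgroup=ffdhe6144 keyid=0
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    # Host-side SPDK app (separate RPC socket): allow only the combination under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target-side app (default RPC socket, wrapped as rpc_cmd in the trace):
    # admit the host NQN with the chosen key pair.
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Host side: attach a controller, which triggers the DH-HMAC-CHAP handshake.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"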
00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.501 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.761 00:18:59.022 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.022 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.022 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.022 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.022 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.022 13:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.022 13:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.022 13:04:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.022 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.022 { 00:18:59.022 "cntlid": 87, 00:18:59.022 "qid": 0, 00:18:59.022 "state": "enabled", 00:18:59.022 "thread": "nvmf_tgt_poll_group_000", 00:18:59.022 "listen_address": { 00:18:59.022 "trtype": "TCP", 00:18:59.022 "adrfam": "IPv4", 00:18:59.022 "traddr": "10.0.0.2", 00:18:59.022 "trsvcid": "4420" 00:18:59.022 }, 00:18:59.022 "peer_address": { 00:18:59.022 "trtype": "TCP", 00:18:59.022 "adrfam": "IPv4", 00:18:59.022 "traddr": "10.0.0.1", 00:18:59.022 "trsvcid": "56890" 00:18:59.022 }, 00:18:59.022 "auth": { 00:18:59.022 "state": "completed", 
00:18:59.022 "digest": "sha384", 00:18:59.022 "dhgroup": "ffdhe6144" 00:18:59.022 } 00:18:59.022 } 00:18:59.022 ]' 00:18:59.022 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.022 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.022 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.281 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.281 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.281 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.281 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.281 13:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.281 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.301 13:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.870 00:19:00.870 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.870 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.870 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.870 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.870 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.870 13:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.870 13:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.870 13:04:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.870 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.870 { 00:19:00.870 "cntlid": 89, 00:19:00.870 "qid": 0, 00:19:00.870 "state": "enabled", 00:19:00.870 "thread": "nvmf_tgt_poll_group_000", 00:19:00.870 "listen_address": { 00:19:00.870 "trtype": "TCP", 00:19:00.870 "adrfam": "IPv4", 00:19:00.870 "traddr": "10.0.0.2", 00:19:00.870 "trsvcid": "4420" 00:19:00.870 }, 00:19:00.870 "peer_address": { 00:19:00.870 "trtype": "TCP", 00:19:00.870 "adrfam": "IPv4", 00:19:00.870 "traddr": "10.0.0.1", 00:19:00.870 "trsvcid": "56906" 00:19:00.870 }, 00:19:00.870 "auth": { 00:19:00.870 "state": "completed", 00:19:00.870 "digest": "sha384", 00:19:00.870 "dhgroup": "ffdhe8192" 00:19:00.870 } 00:19:00.870 } 00:19:00.870 ]' 00:19:00.870 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.130 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.130 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.130 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.130 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.130 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.130 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.130 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.390 13:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:19:01.959 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.959 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:01.959 13:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.959 13:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.959 13:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.959 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.959 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:01.959 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:02.220 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:02.220 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.220 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:02.220 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:02.220 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:02.220 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.220 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.220 13:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.220 13:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.220 13:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.220 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.220 13:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
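After each attach the test verifies, from the target's nvmf_subsystem_get_qpairs output, that the new qpair really negotiated the expected digest, dhgroup and a completed auth state before tearing the connection down again, as the jq checks throughout the trace show. A condensed sketch of that check-and-teardown step follows; digest and dhgroup are placeholders for the combination being exercised (sha384 with ffdhe8192 at this point in the trace), and the rpc.py invocations mirror the rpc_cmd/hostrpc wrappers used by target/auth.sh.

    # Verification and teardown for one authenticated qpair (values shown for the
    # sha384/ffdhe8192 iteration in the surrounding trace).
    digest=sha384 dhgroup=ffdhe8192
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # The host-side controller must exist under the expected name.
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # The target must report a qpair whose auth block matches the combination under test.
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down so the next key/dhgroup combination starts from a clean state.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The trace then repeats the same handshake with the kernel initiator (nvme connect with --dhchap-secret and, where a controller key exists, --dhchap-ctrl-secret), disconnects, and finally removes the host from the subsystem with nvmf_subsystem_remove_host before moving on to the next key.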
00:19:02.791 00:19:02.791 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.791 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.791 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.791 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.791 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.791 13:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.791 13:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.791 13:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.791 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.791 { 00:19:02.791 "cntlid": 91, 00:19:02.791 "qid": 0, 00:19:02.791 "state": "enabled", 00:19:02.791 "thread": "nvmf_tgt_poll_group_000", 00:19:02.791 "listen_address": { 00:19:02.791 "trtype": "TCP", 00:19:02.791 "adrfam": "IPv4", 00:19:02.791 "traddr": "10.0.0.2", 00:19:02.791 "trsvcid": "4420" 00:19:02.791 }, 00:19:02.791 "peer_address": { 00:19:02.791 "trtype": "TCP", 00:19:02.791 "adrfam": "IPv4", 00:19:02.791 "traddr": "10.0.0.1", 00:19:02.791 "trsvcid": "36208" 00:19:02.791 }, 00:19:02.791 "auth": { 00:19:02.791 "state": "completed", 00:19:02.791 "digest": "sha384", 00:19:02.791 "dhgroup": "ffdhe8192" 00:19:02.791 } 00:19:02.791 } 00:19:02.791 ]' 00:19:02.791 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.791 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.791 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.050 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.050 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.050 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.050 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.050 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.050 13:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.990 13:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.560 00:19:04.560 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.560 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.560 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.820 { 
00:19:04.820 "cntlid": 93, 00:19:04.820 "qid": 0, 00:19:04.820 "state": "enabled", 00:19:04.820 "thread": "nvmf_tgt_poll_group_000", 00:19:04.820 "listen_address": { 00:19:04.820 "trtype": "TCP", 00:19:04.820 "adrfam": "IPv4", 00:19:04.820 "traddr": "10.0.0.2", 00:19:04.820 "trsvcid": "4420" 00:19:04.820 }, 00:19:04.820 "peer_address": { 00:19:04.820 "trtype": "TCP", 00:19:04.820 "adrfam": "IPv4", 00:19:04.820 "traddr": "10.0.0.1", 00:19:04.820 "trsvcid": "36240" 00:19:04.820 }, 00:19:04.820 "auth": { 00:19:04.820 "state": "completed", 00:19:04.820 "digest": "sha384", 00:19:04.820 "dhgroup": "ffdhe8192" 00:19:04.820 } 00:19:04.820 } 00:19:04.820 ]' 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.820 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.081 13:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:05.654 13:04:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.654 13:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.224 00:19:06.224 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.225 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.225 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.485 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.485 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.485 13:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.485 13:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.485 13:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.485 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.485 { 00:19:06.485 "cntlid": 95, 00:19:06.485 "qid": 0, 00:19:06.485 "state": "enabled", 00:19:06.485 "thread": "nvmf_tgt_poll_group_000", 00:19:06.485 "listen_address": { 00:19:06.485 "trtype": "TCP", 00:19:06.485 "adrfam": "IPv4", 00:19:06.485 "traddr": "10.0.0.2", 00:19:06.485 "trsvcid": "4420" 00:19:06.485 }, 00:19:06.485 "peer_address": { 00:19:06.485 "trtype": "TCP", 00:19:06.485 "adrfam": "IPv4", 00:19:06.485 "traddr": "10.0.0.1", 00:19:06.485 "trsvcid": "36256" 00:19:06.485 }, 00:19:06.485 "auth": { 00:19:06.485 "state": "completed", 00:19:06.485 "digest": "sha384", 00:19:06.485 "dhgroup": "ffdhe8192" 00:19:06.485 } 00:19:06.485 } 00:19:06.485 ]' 00:19:06.485 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.485 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.485 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.485 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:06.485 13:04:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.746 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.746 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.746 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.746 13:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:19:07.317 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.317 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.317 13:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.317 13:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.317 13:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.317 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:07.317 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.317 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.317 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.317 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.577 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:07.577 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.578 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.578 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:07.578 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.578 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.578 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.578 13:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.578 13:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.578 13:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.578 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.578 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.838 00:19:07.838 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.838 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.838 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.838 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.838 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.838 13:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.838 13:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.099 13:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.099 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.099 { 00:19:08.099 "cntlid": 97, 00:19:08.099 "qid": 0, 00:19:08.099 "state": "enabled", 00:19:08.099 "thread": "nvmf_tgt_poll_group_000", 00:19:08.099 "listen_address": { 00:19:08.099 "trtype": "TCP", 00:19:08.099 "adrfam": "IPv4", 00:19:08.099 "traddr": "10.0.0.2", 00:19:08.099 "trsvcid": "4420" 00:19:08.099 }, 00:19:08.099 "peer_address": { 00:19:08.099 "trtype": "TCP", 00:19:08.099 "adrfam": "IPv4", 00:19:08.099 "traddr": "10.0.0.1", 00:19:08.099 "trsvcid": "36294" 00:19:08.099 }, 00:19:08.099 "auth": { 00:19:08.099 "state": "completed", 00:19:08.099 "digest": "sha512", 00:19:08.099 "dhgroup": "null" 00:19:08.099 } 00:19:08.099 } 00:19:08.099 ]' 00:19:08.099 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.099 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.099 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.099 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:08.099 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.099 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.099 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.099 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.360 13:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret 
DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:19:08.929 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.929 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:08.929 13:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.929 13:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.929 13:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.929 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.929 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.929 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:09.189 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:09.189 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.189 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.189 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:09.189 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.189 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.189 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.189 13:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.189 13:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.189 13:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.189 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.189 13:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.449 00:19:09.449 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.449 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.449 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.449 13:04:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.449 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.449 13:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.449 13:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.449 13:04:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.449 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.449 { 00:19:09.449 "cntlid": 99, 00:19:09.449 "qid": 0, 00:19:09.449 "state": "enabled", 00:19:09.449 "thread": "nvmf_tgt_poll_group_000", 00:19:09.449 "listen_address": { 00:19:09.449 "trtype": "TCP", 00:19:09.449 "adrfam": "IPv4", 00:19:09.449 "traddr": "10.0.0.2", 00:19:09.449 "trsvcid": "4420" 00:19:09.450 }, 00:19:09.450 "peer_address": { 00:19:09.450 "trtype": "TCP", 00:19:09.450 "adrfam": "IPv4", 00:19:09.450 "traddr": "10.0.0.1", 00:19:09.450 "trsvcid": "36318" 00:19:09.450 }, 00:19:09.450 "auth": { 00:19:09.450 "state": "completed", 00:19:09.450 "digest": "sha512", 00:19:09.450 "dhgroup": "null" 00:19:09.450 } 00:19:09.450 } 00:19:09.450 ]' 00:19:09.450 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.710 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.710 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.710 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:09.710 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.710 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.710 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.710 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.971 13:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:19:10.542 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.542 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:10.542 13:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.542 13:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.542 13:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.542 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.542 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.542 13:04:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.801 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:10.801 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.801 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.801 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:10.801 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.801 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.801 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.801 13:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.801 13:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.801 13:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.801 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.801 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.061 00:19:11.061 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.061 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.061 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.061 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.061 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.061 13:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.061 13:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.061 13:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.061 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.061 { 00:19:11.061 "cntlid": 101, 00:19:11.061 "qid": 0, 00:19:11.061 "state": "enabled", 00:19:11.061 "thread": "nvmf_tgt_poll_group_000", 00:19:11.061 "listen_address": { 00:19:11.061 "trtype": "TCP", 00:19:11.061 "adrfam": "IPv4", 00:19:11.061 "traddr": "10.0.0.2", 00:19:11.061 "trsvcid": "4420" 00:19:11.061 }, 00:19:11.061 "peer_address": { 00:19:11.061 "trtype": "TCP", 00:19:11.061 "adrfam": "IPv4", 00:19:11.061 "traddr": "10.0.0.1", 00:19:11.061 "trsvcid": "36350" 00:19:11.061 }, 00:19:11.061 "auth": 
{ 00:19:11.061 "state": "completed", 00:19:11.061 "digest": "sha512", 00:19:11.061 "dhgroup": "null" 00:19:11.061 } 00:19:11.061 } 00:19:11.061 ]' 00:19:11.061 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.322 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.322 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.322 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:11.322 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.322 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.322 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.322 13:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.586 13:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:19:12.155 13:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.155 13:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:12.155 13:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.155 13:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.155 13:04:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.155 13:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.155 13:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:12.155 13:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:12.414 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:12.414 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.414 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.414 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:12.414 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.414 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.414 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:12.414 13:04:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.414 13:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.414 13:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.414 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.415 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.415 00:19:12.674 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.674 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.674 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.674 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.674 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.674 13:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.674 13:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.674 13:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.674 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.674 { 00:19:12.674 "cntlid": 103, 00:19:12.674 "qid": 0, 00:19:12.674 "state": "enabled", 00:19:12.674 "thread": "nvmf_tgt_poll_group_000", 00:19:12.674 "listen_address": { 00:19:12.674 "trtype": "TCP", 00:19:12.674 "adrfam": "IPv4", 00:19:12.674 "traddr": "10.0.0.2", 00:19:12.674 "trsvcid": "4420" 00:19:12.674 }, 00:19:12.674 "peer_address": { 00:19:12.674 "trtype": "TCP", 00:19:12.674 "adrfam": "IPv4", 00:19:12.674 "traddr": "10.0.0.1", 00:19:12.674 "trsvcid": "47590" 00:19:12.674 }, 00:19:12.674 "auth": { 00:19:12.674 "state": "completed", 00:19:12.674 "digest": "sha512", 00:19:12.674 "dhgroup": "null" 00:19:12.674 } 00:19:12.674 } 00:19:12.674 ]' 00:19:12.674 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.674 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.674 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.934 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:12.934 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.934 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.934 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.934 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.934 13:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:19:13.872 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.872 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.872 13:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.872 13:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.872 13:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.873 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.132 00:19:14.132 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.132 13:04:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.132 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.393 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.393 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.393 13:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.393 13:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.393 13:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.393 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.393 { 00:19:14.393 "cntlid": 105, 00:19:14.393 "qid": 0, 00:19:14.393 "state": "enabled", 00:19:14.393 "thread": "nvmf_tgt_poll_group_000", 00:19:14.393 "listen_address": { 00:19:14.393 "trtype": "TCP", 00:19:14.393 "adrfam": "IPv4", 00:19:14.393 "traddr": "10.0.0.2", 00:19:14.393 "trsvcid": "4420" 00:19:14.393 }, 00:19:14.393 "peer_address": { 00:19:14.393 "trtype": "TCP", 00:19:14.393 "adrfam": "IPv4", 00:19:14.393 "traddr": "10.0.0.1", 00:19:14.393 "trsvcid": "47618" 00:19:14.393 }, 00:19:14.393 "auth": { 00:19:14.393 "state": "completed", 00:19:14.393 "digest": "sha512", 00:19:14.393 "dhgroup": "ffdhe2048" 00:19:14.393 } 00:19:14.393 } 00:19:14.393 ]' 00:19:14.393 13:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.393 13:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.393 13:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.393 13:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.393 13:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.393 13:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.393 13:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.393 13:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.654 13:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:19:15.225 13:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.225 13:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:15.225 13:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.225 13:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
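Editor's note: the sha512 cycles above and below all drive the same DH-HMAC-CHAP handshake through target/auth.sh, varying only the DH group (null, ffdhe2048, ffdhe3072, ffdhe4096) and the key index. The following is a minimal sketch of one such cycle, matching the ffdhe2048/key1 pass that follows in the trace; it assumes an SPDK target already listening on 10.0.0.2:4420 with subsystem nqn.2024-03.io.spdk:cnode0 and DH-HMAC-CHAP keys key0..key3 (controller keys ckey0..ckey2) already loaded. $HOSTID and the DHHC-1 secret placeholders are illustrative, not values taken from this run.

# Sketch only: one connect_authenticate pass as exercised by target/auth.sh.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc_py" -s /var/tmp/host.sock "$@"; }   # host-side SPDK app RPC socket
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$HOSTID       # placeholder host NQN / host ID

# Pin the host initiator to a single digest/DH-group combination.
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Allow the host on the target and bind it to a key pair (the ctrlr key is optional).
"$rpc_py" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach from the SPDK initiator; DH-HMAC-CHAP runs during this connect.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'  # expect "nvme0"

# Tear down, then repeat the connect with kernel nvme-cli and raw DHHC-1 secrets.
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$HOSTID" \
    --dhchap-secret 'DHHC-1:01:<key1 secret>:' --dhchap-ctrl-secret 'DHHC-1:02:<ckey1 secret>:'
nvme disconnect -n "$subnqn"
"$rpc_py" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"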
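The jq checks interleaved with each cycle read the negotiated parameters back from the target side between steps 3 and 4 above; condensed, and under the same assumptions as the sketch, they amount to:

# Verify what the target negotiated on the admin qpair (qid 0) of the subsystem.
qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]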
00:19:15.225 13:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.225 13:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.225 13:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:15.225 13:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:15.486 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:15.486 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.486 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.486 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:15.486 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:15.486 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.486 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.486 13:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.486 13:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.486 13:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.486 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.486 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.486 00:19:15.746 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.746 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.746 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.746 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.746 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.746 13:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.746 13:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.746 13:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.746 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.746 { 00:19:15.746 "cntlid": 107, 00:19:15.746 "qid": 0, 00:19:15.746 "state": "enabled", 00:19:15.746 "thread": 
"nvmf_tgt_poll_group_000", 00:19:15.746 "listen_address": { 00:19:15.746 "trtype": "TCP", 00:19:15.746 "adrfam": "IPv4", 00:19:15.746 "traddr": "10.0.0.2", 00:19:15.746 "trsvcid": "4420" 00:19:15.746 }, 00:19:15.746 "peer_address": { 00:19:15.746 "trtype": "TCP", 00:19:15.746 "adrfam": "IPv4", 00:19:15.746 "traddr": "10.0.0.1", 00:19:15.746 "trsvcid": "47656" 00:19:15.746 }, 00:19:15.746 "auth": { 00:19:15.746 "state": "completed", 00:19:15.746 "digest": "sha512", 00:19:15.746 "dhgroup": "ffdhe2048" 00:19:15.746 } 00:19:15.746 } 00:19:15.746 ]' 00:19:15.746 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.746 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.746 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.007 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:16.007 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.007 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.007 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.007 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.007 13:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:16.947 13:04:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.947 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.207 00:19:17.207 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.207 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.208 13:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.468 13:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.469 13:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.469 13:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.469 13:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.469 13:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.469 13:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.469 { 00:19:17.469 "cntlid": 109, 00:19:17.469 "qid": 0, 00:19:17.469 "state": "enabled", 00:19:17.469 "thread": "nvmf_tgt_poll_group_000", 00:19:17.469 "listen_address": { 00:19:17.469 "trtype": "TCP", 00:19:17.469 "adrfam": "IPv4", 00:19:17.469 "traddr": "10.0.0.2", 00:19:17.469 "trsvcid": "4420" 00:19:17.469 }, 00:19:17.469 "peer_address": { 00:19:17.469 "trtype": "TCP", 00:19:17.469 "adrfam": "IPv4", 00:19:17.469 "traddr": "10.0.0.1", 00:19:17.469 "trsvcid": "47700" 00:19:17.469 }, 00:19:17.469 "auth": { 00:19:17.469 "state": "completed", 00:19:17.469 "digest": "sha512", 00:19:17.469 "dhgroup": "ffdhe2048" 00:19:17.469 } 00:19:17.469 } 00:19:17.469 ]' 00:19:17.469 13:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.469 13:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.469 13:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.469 13:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.469 13:04:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.469 13:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.469 13:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.469 13:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.728 13:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:19:18.298 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.298 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:18.298 13:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.298 13:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.558 13:04:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.817 00:19:18.817 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.817 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.817 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.077 { 00:19:19.077 "cntlid": 111, 00:19:19.077 "qid": 0, 00:19:19.077 "state": "enabled", 00:19:19.077 "thread": "nvmf_tgt_poll_group_000", 00:19:19.077 "listen_address": { 00:19:19.077 "trtype": "TCP", 00:19:19.077 "adrfam": "IPv4", 00:19:19.077 "traddr": "10.0.0.2", 00:19:19.077 "trsvcid": "4420" 00:19:19.077 }, 00:19:19.077 "peer_address": { 00:19:19.077 "trtype": "TCP", 00:19:19.077 "adrfam": "IPv4", 00:19:19.077 "traddr": "10.0.0.1", 00:19:19.077 "trsvcid": "47722" 00:19:19.077 }, 00:19:19.077 "auth": { 00:19:19.077 "state": "completed", 00:19:19.077 "digest": "sha512", 00:19:19.077 "dhgroup": "ffdhe2048" 00:19:19.077 } 00:19:19.077 } 00:19:19.077 ]' 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.077 13:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.338 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:19:19.907 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.908 13:04:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:19.908 13:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.908 13:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.908 13:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.908 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.908 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.908 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.908 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.168 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:20.168 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.168 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.168 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:20.168 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:20.168 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.168 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.168 13:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.168 13:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.168 13:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.168 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.168 13:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.429 00:19:20.429 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.429 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.429 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.429 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.429 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.429 13:04:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.429 13:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.429 13:04:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.429 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.429 { 00:19:20.429 "cntlid": 113, 00:19:20.429 "qid": 0, 00:19:20.429 "state": "enabled", 00:19:20.429 "thread": "nvmf_tgt_poll_group_000", 00:19:20.429 "listen_address": { 00:19:20.429 "trtype": "TCP", 00:19:20.429 "adrfam": "IPv4", 00:19:20.429 "traddr": "10.0.0.2", 00:19:20.429 "trsvcid": "4420" 00:19:20.429 }, 00:19:20.429 "peer_address": { 00:19:20.429 "trtype": "TCP", 00:19:20.429 "adrfam": "IPv4", 00:19:20.429 "traddr": "10.0.0.1", 00:19:20.429 "trsvcid": "47752" 00:19:20.429 }, 00:19:20.429 "auth": { 00:19:20.429 "state": "completed", 00:19:20.429 "digest": "sha512", 00:19:20.429 "dhgroup": "ffdhe3072" 00:19:20.429 } 00:19:20.429 } 00:19:20.429 ]' 00:19:20.429 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.690 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.690 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.690 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.690 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.690 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.690 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.690 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.690 13:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.631 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.891 00:19:21.891 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.891 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.891 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.891 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.891 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.891 13:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.891 13:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.891 13:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.891 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.891 { 00:19:21.891 "cntlid": 115, 00:19:21.891 "qid": 0, 00:19:21.891 "state": "enabled", 00:19:21.891 "thread": "nvmf_tgt_poll_group_000", 00:19:21.891 "listen_address": { 00:19:21.891 "trtype": "TCP", 00:19:21.891 "adrfam": "IPv4", 00:19:21.891 "traddr": "10.0.0.2", 00:19:21.891 "trsvcid": "4420" 00:19:21.891 }, 00:19:21.891 "peer_address": { 00:19:21.892 "trtype": "TCP", 00:19:21.892 "adrfam": "IPv4", 00:19:21.892 "traddr": "10.0.0.1", 00:19:21.892 "trsvcid": "54734" 00:19:21.892 }, 00:19:21.892 "auth": { 00:19:21.892 "state": "completed", 00:19:21.892 "digest": "sha512", 00:19:21.892 "dhgroup": "ffdhe3072" 00:19:21.892 } 00:19:21.892 } 
00:19:21.892 ]' 00:19:21.892 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.151 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.151 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.151 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.151 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.151 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.151 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.151 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.411 13:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:19:22.981 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.981 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:22.981 13:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.981 13:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.981 13:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.981 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.981 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.981 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:23.241 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:23.241 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.241 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.241 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:23.241 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:23.241 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.241 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.241 13:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.241 13:04:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.241 13:04:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.241 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.241 13:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.502 00:19:23.502 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.502 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.502 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.502 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.502 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.502 13:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.502 13:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.502 13:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.502 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.502 { 00:19:23.502 "cntlid": 117, 00:19:23.502 "qid": 0, 00:19:23.502 "state": "enabled", 00:19:23.502 "thread": "nvmf_tgt_poll_group_000", 00:19:23.502 "listen_address": { 00:19:23.502 "trtype": "TCP", 00:19:23.502 "adrfam": "IPv4", 00:19:23.502 "traddr": "10.0.0.2", 00:19:23.502 "trsvcid": "4420" 00:19:23.502 }, 00:19:23.502 "peer_address": { 00:19:23.502 "trtype": "TCP", 00:19:23.502 "adrfam": "IPv4", 00:19:23.502 "traddr": "10.0.0.1", 00:19:23.502 "trsvcid": "54764" 00:19:23.502 }, 00:19:23.502 "auth": { 00:19:23.502 "state": "completed", 00:19:23.502 "digest": "sha512", 00:19:23.502 "dhgroup": "ffdhe3072" 00:19:23.502 } 00:19:23.502 } 00:19:23.502 ]' 00:19:23.502 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.762 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.762 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.762 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.762 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.762 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.762 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.762 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.762 13:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:19:24.332 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.332 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:24.332 13:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.332 13:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.594 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.854 00:19:24.854 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.854 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.854 13:04:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.116 { 00:19:25.116 "cntlid": 119, 00:19:25.116 "qid": 0, 00:19:25.116 "state": "enabled", 00:19:25.116 "thread": "nvmf_tgt_poll_group_000", 00:19:25.116 "listen_address": { 00:19:25.116 "trtype": "TCP", 00:19:25.116 "adrfam": "IPv4", 00:19:25.116 "traddr": "10.0.0.2", 00:19:25.116 "trsvcid": "4420" 00:19:25.116 }, 00:19:25.116 "peer_address": { 00:19:25.116 "trtype": "TCP", 00:19:25.116 "adrfam": "IPv4", 00:19:25.116 "traddr": "10.0.0.1", 00:19:25.116 "trsvcid": "54792" 00:19:25.116 }, 00:19:25.116 "auth": { 00:19:25.116 "state": "completed", 00:19:25.116 "digest": "sha512", 00:19:25.116 "dhgroup": "ffdhe3072" 00:19:25.116 } 00:19:25.116 } 00:19:25.116 ]' 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.116 13:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.376 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:19:25.947 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.947 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:25.947 13:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.947 13:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.947 13:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.947 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.947 13:04:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.947 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.947 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:26.209 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:26.209 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.209 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.209 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:26.209 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.209 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.209 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.209 13:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.209 13:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.209 13:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.209 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.209 13:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.469 00:19:26.469 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.469 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.469 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.469 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.469 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.469 13:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.469 13:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.469 13:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.469 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.469 { 00:19:26.469 "cntlid": 121, 00:19:26.469 "qid": 0, 00:19:26.469 "state": "enabled", 00:19:26.469 "thread": "nvmf_tgt_poll_group_000", 00:19:26.469 "listen_address": { 00:19:26.469 "trtype": "TCP", 00:19:26.469 "adrfam": "IPv4", 
00:19:26.469 "traddr": "10.0.0.2", 00:19:26.469 "trsvcid": "4420" 00:19:26.469 }, 00:19:26.469 "peer_address": { 00:19:26.469 "trtype": "TCP", 00:19:26.469 "adrfam": "IPv4", 00:19:26.469 "traddr": "10.0.0.1", 00:19:26.469 "trsvcid": "54830" 00:19:26.469 }, 00:19:26.469 "auth": { 00:19:26.469 "state": "completed", 00:19:26.469 "digest": "sha512", 00:19:26.469 "dhgroup": "ffdhe4096" 00:19:26.469 } 00:19:26.469 } 00:19:26.469 ]' 00:19:26.469 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.730 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.730 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.730 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.730 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.730 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.730 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.730 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.730 13:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.672 13:04:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.672 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.673 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.933 00:19:27.933 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.933 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.933 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.194 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.194 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.194 13:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.194 13:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.194 13:04:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.194 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.194 { 00:19:28.194 "cntlid": 123, 00:19:28.194 "qid": 0, 00:19:28.194 "state": "enabled", 00:19:28.194 "thread": "nvmf_tgt_poll_group_000", 00:19:28.194 "listen_address": { 00:19:28.194 "trtype": "TCP", 00:19:28.194 "adrfam": "IPv4", 00:19:28.194 "traddr": "10.0.0.2", 00:19:28.194 "trsvcid": "4420" 00:19:28.194 }, 00:19:28.194 "peer_address": { 00:19:28.194 "trtype": "TCP", 00:19:28.194 "adrfam": "IPv4", 00:19:28.194 "traddr": "10.0.0.1", 00:19:28.194 "trsvcid": "54852" 00:19:28.194 }, 00:19:28.194 "auth": { 00:19:28.194 "state": "completed", 00:19:28.194 "digest": "sha512", 00:19:28.194 "dhgroup": "ffdhe4096" 00:19:28.194 } 00:19:28.194 } 00:19:28.194 ]' 00:19:28.194 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.194 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.194 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.194 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.194 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.194 13:04:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.194 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.194 13:04:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.454 13:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:19:29.071 13:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.071 13:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:29.071 13:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.071 13:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.071 13:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.071 13:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.071 13:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:29.071 13:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:29.340 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:29.340 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.341 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.341 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:29.341 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:29.341 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.341 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.341 13:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.341 13:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.341 13:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.341 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.341 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.601 00:19:29.601 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.601 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.601 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.861 { 00:19:29.861 "cntlid": 125, 00:19:29.861 "qid": 0, 00:19:29.861 "state": "enabled", 00:19:29.861 "thread": "nvmf_tgt_poll_group_000", 00:19:29.861 "listen_address": { 00:19:29.861 "trtype": "TCP", 00:19:29.861 "adrfam": "IPv4", 00:19:29.861 "traddr": "10.0.0.2", 00:19:29.861 "trsvcid": "4420" 00:19:29.861 }, 00:19:29.861 "peer_address": { 00:19:29.861 "trtype": "TCP", 00:19:29.861 "adrfam": "IPv4", 00:19:29.861 "traddr": "10.0.0.1", 00:19:29.861 "trsvcid": "54872" 00:19:29.861 }, 00:19:29.861 "auth": { 00:19:29.861 "state": "completed", 00:19:29.861 "digest": "sha512", 00:19:29.861 "dhgroup": "ffdhe4096" 00:19:29.861 } 00:19:29.861 } 00:19:29.861 ]' 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.861 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.121 13:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:19:30.693 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
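The pass that finishes here (sha512 / ffdhe4096 / key2) is the same sequence the trace repeats for every digest, DH group and key index. Below is a condensed, stand-alone sketch of one such pass, reconstructed from the commands visible in the trace rather than copied from target/auth.sh; the target-side RPC socket, the loop variables and the pre-registered key names key0..key3 / ckey0..ckey3 are assumptions.

#!/usr/bin/env bash
# Sketch of one connect_authenticate pass, assembled from the trace (assumptions noted above).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock                          # host-side SPDK instance, as used by hostrpc in the trace
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

digest=sha512 dhgroup=ffdhe4096 keyid=2               # the trace iterates digests, dhgroups and key indexes

# Host side: restrict the initiator to the digest/DH group under test.
"$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side: admit the host NQN and bind it to a DH-HMAC-CHAP key pair.
# (In the trace, key3 is added without --dhchap-ctrlr-key, i.e. no bidirectional auth for that key.)
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Host side: attach a controller over the authenticated TCP path.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify the negotiated parameters on the target (this is what the jq checks in the trace assert).
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'
# expected: sha512 ffdhe4096 completed

# Tear down before the next combination.
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the trace itself the target-side calls go through rpc_cmd (whose rpc.py invocation is hidden by xtrace_disable), and each pass also performs an nvme-cli reconnect with the raw DHHC-1 secrets before the host is removed.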
00:19:30.693 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:30.693 13:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.693 13:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.693 13:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.693 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.693 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:30.693 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:30.955 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:30.955 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.955 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.955 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.955 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.955 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.955 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:30.955 13:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.955 13:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.955 13:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.955 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.955 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.217 00:19:31.217 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.217 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.217 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.217 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.217 13:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.217 13:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.217 13:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:31.217 13:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.217 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.217 { 00:19:31.217 "cntlid": 127, 00:19:31.217 "qid": 0, 00:19:31.217 "state": "enabled", 00:19:31.217 "thread": "nvmf_tgt_poll_group_000", 00:19:31.217 "listen_address": { 00:19:31.217 "trtype": "TCP", 00:19:31.217 "adrfam": "IPv4", 00:19:31.217 "traddr": "10.0.0.2", 00:19:31.217 "trsvcid": "4420" 00:19:31.217 }, 00:19:31.217 "peer_address": { 00:19:31.217 "trtype": "TCP", 00:19:31.217 "adrfam": "IPv4", 00:19:31.217 "traddr": "10.0.0.1", 00:19:31.217 "trsvcid": "54894" 00:19:31.217 }, 00:19:31.217 "auth": { 00:19:31.217 "state": "completed", 00:19:31.217 "digest": "sha512", 00:19:31.217 "dhgroup": "ffdhe4096" 00:19:31.217 } 00:19:31.217 } 00:19:31.217 ]' 00:19:31.217 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.478 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.478 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.478 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.478 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.478 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.478 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.478 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.738 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:19:32.310 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.310 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.310 13:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.310 13:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.310 13:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.310 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.310 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.310 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.310 13:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.310 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:32.310 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.310 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.310 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:32.310 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.310 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.310 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.310 13:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.310 13:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.310 13:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.310 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.310 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.882 00:19:32.882 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.882 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.882 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.882 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.882 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.882 13:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.882 13:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.882 13:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.882 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.882 { 00:19:32.882 "cntlid": 129, 00:19:32.882 "qid": 0, 00:19:32.882 "state": "enabled", 00:19:32.882 "thread": "nvmf_tgt_poll_group_000", 00:19:32.882 "listen_address": { 00:19:32.882 "trtype": "TCP", 00:19:32.882 "adrfam": "IPv4", 00:19:32.882 "traddr": "10.0.0.2", 00:19:32.882 "trsvcid": "4420" 00:19:32.882 }, 00:19:32.882 "peer_address": { 00:19:32.882 "trtype": "TCP", 00:19:32.882 "adrfam": "IPv4", 00:19:32.882 "traddr": "10.0.0.1", 00:19:32.882 "trsvcid": "53136" 00:19:32.882 }, 00:19:32.882 "auth": { 00:19:32.882 "state": "completed", 00:19:32.882 "digest": "sha512", 00:19:32.882 "dhgroup": "ffdhe6144" 00:19:32.882 } 00:19:32.882 } 00:19:32.882 ]' 00:19:32.882 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.882 13:04:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.882 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.882 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.882 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.143 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.143 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.143 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.144 13:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.086 13:04:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.086 13:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.347 00:19:34.347 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.347 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.347 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.608 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.608 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.608 13:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.608 13:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.608 13:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.608 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.608 { 00:19:34.608 "cntlid": 131, 00:19:34.608 "qid": 0, 00:19:34.608 "state": "enabled", 00:19:34.608 "thread": "nvmf_tgt_poll_group_000", 00:19:34.608 "listen_address": { 00:19:34.608 "trtype": "TCP", 00:19:34.608 "adrfam": "IPv4", 00:19:34.608 "traddr": "10.0.0.2", 00:19:34.608 "trsvcid": "4420" 00:19:34.608 }, 00:19:34.608 "peer_address": { 00:19:34.608 "trtype": "TCP", 00:19:34.608 "adrfam": "IPv4", 00:19:34.608 "traddr": "10.0.0.1", 00:19:34.608 "trsvcid": "53142" 00:19:34.608 }, 00:19:34.608 "auth": { 00:19:34.608 "state": "completed", 00:19:34.608 "digest": "sha512", 00:19:34.608 "dhgroup": "ffdhe6144" 00:19:34.608 } 00:19:34.608 } 00:19:34.608 ]' 00:19:34.608 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.608 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.608 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.608 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.608 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.608 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.608 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.869 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.869 13:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.810 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.068 00:19:36.068 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.068 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.068 13:04:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.327 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.327 13:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.327 13:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.327 13:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.327 13:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.327 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.327 { 00:19:36.327 "cntlid": 133, 00:19:36.327 "qid": 0, 00:19:36.327 "state": "enabled", 00:19:36.327 "thread": "nvmf_tgt_poll_group_000", 00:19:36.327 "listen_address": { 00:19:36.327 "trtype": "TCP", 00:19:36.327 "adrfam": "IPv4", 00:19:36.327 "traddr": "10.0.0.2", 00:19:36.327 "trsvcid": "4420" 00:19:36.327 }, 00:19:36.327 "peer_address": { 00:19:36.327 "trtype": "TCP", 00:19:36.327 "adrfam": "IPv4", 00:19:36.327 "traddr": "10.0.0.1", 00:19:36.327 "trsvcid": "53160" 00:19:36.327 }, 00:19:36.327 "auth": { 00:19:36.327 "state": "completed", 00:19:36.327 "digest": "sha512", 00:19:36.327 "dhgroup": "ffdhe6144" 00:19:36.327 } 00:19:36.327 } 00:19:36.327 ]' 00:19:36.327 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.327 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.327 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.327 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.327 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.327 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.327 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.327 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.586 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:19:37.154 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.154 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:37.154 13:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.154 13:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.154 13:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.154 13:04:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.154 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.154 13:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.414 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:37.414 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.414 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.414 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:37.414 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.414 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.414 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:37.414 13:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.414 13:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.414 13:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.414 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.414 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.673 00:19:37.673 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.673 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.673 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.933 { 00:19:37.933 "cntlid": 135, 00:19:37.933 "qid": 0, 00:19:37.933 "state": "enabled", 00:19:37.933 "thread": "nvmf_tgt_poll_group_000", 00:19:37.933 "listen_address": { 00:19:37.933 "trtype": "TCP", 00:19:37.933 "adrfam": "IPv4", 00:19:37.933 "traddr": "10.0.0.2", 00:19:37.933 "trsvcid": "4420" 00:19:37.933 }, 
00:19:37.933 "peer_address": { 00:19:37.933 "trtype": "TCP", 00:19:37.933 "adrfam": "IPv4", 00:19:37.933 "traddr": "10.0.0.1", 00:19:37.933 "trsvcid": "53190" 00:19:37.933 }, 00:19:37.933 "auth": { 00:19:37.933 "state": "completed", 00:19:37.933 "digest": "sha512", 00:19:37.933 "dhgroup": "ffdhe6144" 00:19:37.933 } 00:19:37.933 } 00:19:37.933 ]' 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.933 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.192 13:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:19:38.762 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.762 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:38.762 13:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.762 13:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.762 13:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.762 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.762 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.762 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.762 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.024 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:39.024 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.024 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.024 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.024 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.024 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:39.024 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.024 13:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.024 13:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.024 13:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.024 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.024 13:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.595 00:19:39.595 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.595 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.595 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.595 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.595 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.595 13:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.595 13:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.595 13:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.595 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.595 { 00:19:39.595 "cntlid": 137, 00:19:39.595 "qid": 0, 00:19:39.595 "state": "enabled", 00:19:39.595 "thread": "nvmf_tgt_poll_group_000", 00:19:39.595 "listen_address": { 00:19:39.595 "trtype": "TCP", 00:19:39.595 "adrfam": "IPv4", 00:19:39.595 "traddr": "10.0.0.2", 00:19:39.595 "trsvcid": "4420" 00:19:39.595 }, 00:19:39.595 "peer_address": { 00:19:39.595 "trtype": "TCP", 00:19:39.595 "adrfam": "IPv4", 00:19:39.595 "traddr": "10.0.0.1", 00:19:39.595 "trsvcid": "53202" 00:19:39.595 }, 00:19:39.595 "auth": { 00:19:39.595 "state": "completed", 00:19:39.595 "digest": "sha512", 00:19:39.595 "dhgroup": "ffdhe8192" 00:19:39.595 } 00:19:39.595 } 00:19:39.595 ]' 00:19:39.595 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.857 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.857 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.857 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.857 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.857 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.857 13:05:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.857 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.857 13:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.800 13:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.371 00:19:41.371 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.371 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.371 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.631 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.631 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.631 13:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.631 13:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.631 13:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.631 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.631 { 00:19:41.631 "cntlid": 139, 00:19:41.631 "qid": 0, 00:19:41.631 "state": "enabled", 00:19:41.631 "thread": "nvmf_tgt_poll_group_000", 00:19:41.631 "listen_address": { 00:19:41.631 "trtype": "TCP", 00:19:41.631 "adrfam": "IPv4", 00:19:41.631 "traddr": "10.0.0.2", 00:19:41.631 "trsvcid": "4420" 00:19:41.631 }, 00:19:41.631 "peer_address": { 00:19:41.631 "trtype": "TCP", 00:19:41.632 "adrfam": "IPv4", 00:19:41.632 "traddr": "10.0.0.1", 00:19:41.632 "trsvcid": "53220" 00:19:41.632 }, 00:19:41.632 "auth": { 00:19:41.632 "state": "completed", 00:19:41.632 "digest": "sha512", 00:19:41.632 "dhgroup": "ffdhe8192" 00:19:41.632 } 00:19:41.632 } 00:19:41.632 ]' 00:19:41.632 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.632 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.632 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.632 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.632 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.632 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.632 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.632 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.891 13:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZmMxMGQ2OTdkNzU5YTQ5YzUxZDFiZGMzNzg2NGUwZmTHaa7F: --dhchap-ctrl-secret DHHC-1:02:OTM0NmM2YjU2MWZlZDVjODZiMTUyYjUwNjkxODMxNjQzMGFjMmFiZjI5ZGQwMWUws/fYKw==: 00:19:42.462 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.462 13:05:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:42.462 13:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.462 13:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.462 13:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.462 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.462 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.463 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.723 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:42.723 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.723 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.723 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.723 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:42.723 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.723 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.723 13:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.723 13:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.723 13:05:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.723 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.723 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.296 00:19:43.296 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.296 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.296 13:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.296 13:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.296 13:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.296 13:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.296 13:05:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:43.296 13:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.296 13:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.296 { 00:19:43.296 "cntlid": 141, 00:19:43.296 "qid": 0, 00:19:43.296 "state": "enabled", 00:19:43.296 "thread": "nvmf_tgt_poll_group_000", 00:19:43.296 "listen_address": { 00:19:43.296 "trtype": "TCP", 00:19:43.296 "adrfam": "IPv4", 00:19:43.296 "traddr": "10.0.0.2", 00:19:43.296 "trsvcid": "4420" 00:19:43.296 }, 00:19:43.296 "peer_address": { 00:19:43.296 "trtype": "TCP", 00:19:43.296 "adrfam": "IPv4", 00:19:43.296 "traddr": "10.0.0.1", 00:19:43.296 "trsvcid": "45368" 00:19:43.296 }, 00:19:43.296 "auth": { 00:19:43.296 "state": "completed", 00:19:43.296 "digest": "sha512", 00:19:43.296 "dhgroup": "ffdhe8192" 00:19:43.296 } 00:19:43.296 } 00:19:43.296 ]' 00:19:43.296 13:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.557 13:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.557 13:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.557 13:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.557 13:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.557 13:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.557 13:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.557 13:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.817 13:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:YzViZmExYjhmMTE2MmQyNTQ4N2M1NTZkZWIzMmI4YTZlZWQ1ZjVjNTI5MWJjNWIxvzU80A==: --dhchap-ctrl-secret DHHC-1:01:NjQ3MjBjZDk5ZGI4NTY2YWZlYzE2ZTJhYmEyNzVkN2UUlNwr: 00:19:44.389 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.389 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:44.389 13:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.389 13:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.389 13:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.389 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.389 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:44.389 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:44.650 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:19:44.650 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.650 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.650 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:44.650 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:44.650 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.650 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:44.650 13:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.650 13:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.650 13:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.650 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.650 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.222 00:19:45.222 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.222 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.222 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.222 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.222 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.222 13:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.222 13:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.222 13:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.222 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.222 { 00:19:45.222 "cntlid": 143, 00:19:45.222 "qid": 0, 00:19:45.222 "state": "enabled", 00:19:45.222 "thread": "nvmf_tgt_poll_group_000", 00:19:45.222 "listen_address": { 00:19:45.222 "trtype": "TCP", 00:19:45.222 "adrfam": "IPv4", 00:19:45.222 "traddr": "10.0.0.2", 00:19:45.222 "trsvcid": "4420" 00:19:45.222 }, 00:19:45.222 "peer_address": { 00:19:45.222 "trtype": "TCP", 00:19:45.222 "adrfam": "IPv4", 00:19:45.222 "traddr": "10.0.0.1", 00:19:45.222 "trsvcid": "45392" 00:19:45.222 }, 00:19:45.222 "auth": { 00:19:45.222 "state": "completed", 00:19:45.222 "digest": "sha512", 00:19:45.222 "dhgroup": "ffdhe8192" 00:19:45.222 } 00:19:45.222 } 00:19:45.222 ]' 00:19:45.222 13:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.222 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.222 
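Each connect_authenticate round above drives the same RPC sequence: constrain the host daemon on /var/tmp/host.sock to the digest/dhgroup under test, allow the host NQN on the subsystem with the key pair under test, attach a controller with that pair, read back the negotiated auth parameters from the qpair, then detach and repeat the handshake with the kernel initiator. A minimal bash sketch of one round, assuming the named keys (key0..key3, ckey0..ckey2) were registered earlier in the run; HOSTID, HOSTNQN and the DHHC-1 secrets below are placeholders for the uuid-based values seen in the log:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

  # host side: restrict the digest/dhgroup combination being exercised
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # target side: allow this host with the key pair under test
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # authenticate a controller from the host daemon, then check what the qpair negotiated
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # same round again with the kernel initiator, using the raw DHHC-1 secrets
  nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID \
      --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
  nvme disconnect -n $SUBNQN
  $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN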
13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.483 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.483 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.483 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.483 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.483 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.483 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:19:46.426 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.426 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:46.426 13:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.427 13:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.427 13:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.427 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:46.427 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:46.427 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:46.427 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:46.427 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:46.427 13:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:46.427 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:46.427 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.427 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:46.427 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:46.427 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:46.427 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.427 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:46.427 13:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.427 13:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.427 13:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.427 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.427 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.998 00:19:46.998 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.998 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.998 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.259 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.259 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.259 13:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.259 13:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.259 13:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.259 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.259 { 00:19:47.259 "cntlid": 145, 00:19:47.259 "qid": 0, 00:19:47.259 "state": "enabled", 00:19:47.259 "thread": "nvmf_tgt_poll_group_000", 00:19:47.259 "listen_address": { 00:19:47.259 "trtype": "TCP", 00:19:47.259 "adrfam": "IPv4", 00:19:47.259 "traddr": "10.0.0.2", 00:19:47.259 "trsvcid": "4420" 00:19:47.259 }, 00:19:47.259 "peer_address": { 00:19:47.259 "trtype": "TCP", 00:19:47.259 "adrfam": "IPv4", 00:19:47.259 "traddr": "10.0.0.1", 00:19:47.259 "trsvcid": "45426" 00:19:47.259 }, 00:19:47.259 "auth": { 00:19:47.259 "state": "completed", 00:19:47.259 "digest": "sha512", 00:19:47.259 "dhgroup": "ffdhe8192" 00:19:47.259 } 00:19:47.259 } 00:19:47.259 ]' 00:19:47.259 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.259 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.259 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.259 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.259 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.259 13:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.259 13:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.259 13:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.520 13:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:ZTM3NjllNzg0OWUwMGM3OWM2NzdkNTY2YTc1YTg3MTE1MjY4NjVmNTVjYjcxYmJlA2FDoA==: --dhchap-ctrl-secret DHHC-1:03:MDA5M2NmNTBlNTBhYTA0YWYwYjA3OGE4ZjVjY2U1ZTRiODVhYmFjNmY5NGU3ODIwMTY1MTZmMmIwNjFhYmIzN2iR7G8=: 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:48.092 13:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:48.665 request: 00:19:48.665 { 00:19:48.665 "name": "nvme0", 00:19:48.665 "trtype": "tcp", 00:19:48.665 "traddr": "10.0.0.2", 00:19:48.665 "adrfam": "ipv4", 00:19:48.665 "trsvcid": "4420", 00:19:48.665 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:48.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:48.665 "prchk_reftag": false, 00:19:48.665 "prchk_guard": false, 00:19:48.665 "hdgst": false, 00:19:48.665 "ddgst": false, 00:19:48.665 "dhchap_key": "key2", 00:19:48.665 "method": "bdev_nvme_attach_controller", 00:19:48.665 "req_id": 1 00:19:48.665 } 00:19:48.665 Got JSON-RPC error response 00:19:48.665 response: 00:19:48.665 { 00:19:48.665 "code": -5, 00:19:48.665 "message": "Input/output error" 00:19:48.665 } 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:48.665 13:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:48.926 request: 00:19:48.926 { 00:19:48.926 "name": "nvme0", 00:19:48.926 "trtype": "tcp", 00:19:48.926 "traddr": "10.0.0.2", 00:19:48.926 "adrfam": "ipv4", 00:19:48.926 "trsvcid": "4420", 00:19:48.926 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:48.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:48.926 "prchk_reftag": false, 00:19:48.926 "prchk_guard": false, 00:19:48.926 "hdgst": false, 00:19:48.926 "ddgst": false, 00:19:48.926 "dhchap_key": "key1", 00:19:48.926 "dhchap_ctrlr_key": "ckey2", 00:19:48.926 "method": "bdev_nvme_attach_controller", 00:19:48.926 "req_id": 1 00:19:48.926 } 00:19:48.926 Got JSON-RPC error response 00:19:48.926 response: 00:19:48.926 { 00:19:48.926 "code": -5, 00:19:48.926 "message": "Input/output error" 00:19:48.926 } 00:19:49.185 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:49.185 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:49.185 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:49.185 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:49.185 13:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.186 13:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.445 request: 00:19:49.445 { 00:19:49.445 "name": "nvme0", 00:19:49.445 "trtype": "tcp", 00:19:49.445 "traddr": "10.0.0.2", 00:19:49.445 "adrfam": "ipv4", 00:19:49.445 "trsvcid": "4420", 00:19:49.445 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:49.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:49.445 "prchk_reftag": false, 00:19:49.445 "prchk_guard": false, 00:19:49.445 "hdgst": false, 00:19:49.445 "ddgst": false, 00:19:49.445 "dhchap_key": "key1", 00:19:49.445 "dhchap_ctrlr_key": "ckey1", 00:19:49.445 "method": "bdev_nvme_attach_controller", 00:19:49.445 "req_id": 1 00:19:49.445 } 00:19:49.445 Got JSON-RPC error response 00:19:49.445 response: 00:19:49.445 { 00:19:49.445 "code": -5, 00:19:49.445 "message": "Input/output error" 00:19:49.445 } 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 681518 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 681518 ']' 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 681518 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 681518 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 681518' 00:19:49.704 killing process with pid 681518 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 681518 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 681518 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=707031 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 707031 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 707031 ']' 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.704 13:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.644 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.644 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:50.644 13:05:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:50.644 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:50.644 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.644 13:05:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.644 13:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:50.644 13:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 707031 00:19:50.644 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 707031 ']' 00:19:50.644 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.644 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.645 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
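The restart just above (killprocess of pid 681518, then nvmfappstart) brings up a second nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and the nvmf_auth debug log flag, so the remaining checks run against a freshly configured target. A rough sketch of that restart, assuming the default /var/tmp/spdk.sock RPC socket; the polling loop stands in for the suite's waitforlisten helper, and framework_start_init is the usual first RPC after a --wait-for-rpc start (the batched rpc_cmd that actually configures this target is not expanded in the log):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!   # approximate: this is the ip-netns wrapper's pid, the suite tracks the real target pid
  # stand-in for waitforlisten: poll until the RPC socket answers
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # with --wait-for-rpc the app idles until initialization is finished explicitly
  $SPDK/scripts/rpc.py framework_start_init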
00:19:50.645 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.645 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.904 13:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.475 00:19:51.475 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.475 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.475 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.475 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.475 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.475 13:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.475 13:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.734 13:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.734 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.734 { 00:19:51.734 
"cntlid": 1, 00:19:51.734 "qid": 0, 00:19:51.734 "state": "enabled", 00:19:51.734 "thread": "nvmf_tgt_poll_group_000", 00:19:51.734 "listen_address": { 00:19:51.734 "trtype": "TCP", 00:19:51.734 "adrfam": "IPv4", 00:19:51.734 "traddr": "10.0.0.2", 00:19:51.734 "trsvcid": "4420" 00:19:51.734 }, 00:19:51.734 "peer_address": { 00:19:51.734 "trtype": "TCP", 00:19:51.734 "adrfam": "IPv4", 00:19:51.734 "traddr": "10.0.0.1", 00:19:51.734 "trsvcid": "45492" 00:19:51.734 }, 00:19:51.734 "auth": { 00:19:51.734 "state": "completed", 00:19:51.734 "digest": "sha512", 00:19:51.734 "dhgroup": "ffdhe8192" 00:19:51.734 } 00:19:51.734 } 00:19:51.734 ]' 00:19:51.734 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.734 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.734 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.734 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.734 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.734 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.734 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.734 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.994 13:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NzM2MGVjODFmMTY3M2NlYzc4NWRjZDk0Mzk5OTJkMjU4ZDNiZjVhNmYzZjNiZTMzYWY0MTMzOWQ2ODVmNmZkYaIefOA=: 00:19:52.564 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.564 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:52.564 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.564 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.564 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.564 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:52.564 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.564 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.564 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.564 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:52.564 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:52.564 13:05:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.825 request: 00:19:52.825 { 00:19:52.825 "name": "nvme0", 00:19:52.825 "trtype": "tcp", 00:19:52.825 "traddr": "10.0.0.2", 00:19:52.825 "adrfam": "ipv4", 00:19:52.825 "trsvcid": "4420", 00:19:52.825 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:52.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:52.825 "prchk_reftag": false, 00:19:52.825 "prchk_guard": false, 00:19:52.825 "hdgst": false, 00:19:52.825 "ddgst": false, 00:19:52.825 "dhchap_key": "key3", 00:19:52.825 "method": "bdev_nvme_attach_controller", 00:19:52.825 "req_id": 1 00:19:52.825 } 00:19:52.825 Got JSON-RPC error response 00:19:52.825 response: 00:19:52.825 { 00:19:52.825 "code": -5, 00:19:52.825 "message": "Input/output error" 00:19:52.825 } 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:52.825 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:53.086 request: 00:19:53.086 { 00:19:53.086 "name": "nvme0", 00:19:53.086 "trtype": "tcp", 00:19:53.086 "traddr": "10.0.0.2", 00:19:53.086 "adrfam": "ipv4", 00:19:53.086 "trsvcid": "4420", 00:19:53.086 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:53.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:53.086 "prchk_reftag": false, 00:19:53.086 "prchk_guard": false, 00:19:53.086 "hdgst": false, 00:19:53.086 "ddgst": false, 00:19:53.086 "dhchap_key": "key3", 00:19:53.086 "method": "bdev_nvme_attach_controller", 00:19:53.086 "req_id": 1 00:19:53.086 } 00:19:53.086 Got JSON-RPC error response 00:19:53.086 response: 00:19:53.086 { 00:19:53.086 "code": -5, 00:19:53.086 "message": "Input/output error" 00:19:53.086 } 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.086 13:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:53.347 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:53.608 request: 00:19:53.608 { 00:19:53.608 "name": "nvme0", 00:19:53.608 "trtype": "tcp", 00:19:53.608 "traddr": "10.0.0.2", 00:19:53.608 "adrfam": "ipv4", 00:19:53.608 "trsvcid": "4420", 00:19:53.608 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:53.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:53.608 "prchk_reftag": false, 00:19:53.608 "prchk_guard": false, 00:19:53.608 "hdgst": false, 00:19:53.608 "ddgst": false, 00:19:53.608 
"dhchap_key": "key0", 00:19:53.608 "dhchap_ctrlr_key": "key1", 00:19:53.608 "method": "bdev_nvme_attach_controller", 00:19:53.608 "req_id": 1 00:19:53.608 } 00:19:53.608 Got JSON-RPC error response 00:19:53.608 response: 00:19:53.608 { 00:19:53.608 "code": -5, 00:19:53.608 "message": "Input/output error" 00:19:53.608 } 00:19:53.608 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:53.608 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:53.608 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:53.608 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:53.608 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:53.608 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:53.608 00:19:53.869 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:53.869 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:53.869 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.869 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.869 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.869 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.130 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:54.130 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:54.130 13:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 682073 00:19:54.130 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 682073 ']' 00:19:54.130 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 682073 00:19:54.130 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:54.130 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.130 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 682073 00:19:54.130 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:54.130 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:54.130 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 682073' 00:19:54.130 killing process with pid 682073 00:19:54.130 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 682073 00:19:54.130 13:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 682073 00:19:54.390 
13:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:54.390 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:54.390 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:54.390 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:54.390 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:54.390 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:54.390 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:54.390 rmmod nvme_tcp 00:19:54.390 rmmod nvme_fabrics 00:19:54.390 rmmod nvme_keyring 00:19:54.390 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:54.390 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:54.390 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:54.390 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 707031 ']' 00:19:54.390 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 707031 00:19:54.390 13:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 707031 ']' 00:19:54.391 13:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 707031 00:19:54.391 13:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:54.391 13:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.391 13:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 707031 00:19:54.391 13:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:54.391 13:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:54.391 13:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 707031' 00:19:54.391 killing process with pid 707031 00:19:54.391 13:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 707031 00:19:54.391 13:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 707031 00:19:54.651 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:54.651 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:54.651 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:54.652 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.652 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.652 13:05:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.652 13:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.652 13:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.673 13:05:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.673 13:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ysV /tmp/spdk.key-sha256.TzV /tmp/spdk.key-sha384.7jP /tmp/spdk.key-sha512.dqm /tmp/spdk.key-sha512.99i /tmp/spdk.key-sha384.9ce /tmp/spdk.key-sha256.ch1 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:56.673 00:19:56.673 real 2m18.655s 00:19:56.673 user 5m7.720s 00:19:56.673 sys 0m19.495s 00:19:56.673 13:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:56.673 13:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.673 ************************************ 00:19:56.673 END TEST nvmf_auth_target 00:19:56.673 ************************************ 00:19:56.673 13:05:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:56.673 13:05:18 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:56.673 13:05:18 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:56.673 13:05:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:56.673 13:05:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:56.673 13:05:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:56.673 ************************************ 00:19:56.673 START TEST nvmf_bdevio_no_huge 00:19:56.673 ************************************ 00:19:56.673 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:56.933 * Looking for test storage... 00:19:56.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:56.933 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.933 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:56.933 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.933 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.933 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.933 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.933 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.933 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.933 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
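The auth-target run that just finished spent its final phase on negative checks: every JSON request/response pair above that ends in "code": -5, "Input/output error" is an attach attempt made with parameters the other side was not configured to accept, such as the wrong key index, a controller key that was never registered, or a digest/dhgroup set the host no longer offers. Compressed into a sketch, with the same placeholder variables as before and a plain ! standing in for the suite's NOT helper, which only asserts a non-zero exit:

  # target only allows key1 for this host ...
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1
  # ... so authenticating with key2, or with a mismatched ckey, must fail with -5 (Input/output error)
  ! $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN --dhchap-key key2
  $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN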
00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.934 13:05:18 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.934 13:05:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:05.074 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:05.074 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:05.074 
13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:05.074 Found net devices under 0000:31:00.0: cvl_0_0 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:05.074 Found net devices under 0000:31:00.1: cvl_0_1 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.074 13:05:26 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:05.074 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:05.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:20:05.075 00:20:05.075 --- 10.0.0.2 ping statistics --- 00:20:05.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.075 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:05.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:05.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:20:05.075 00:20:05.075 --- 10.0.0.1 ping statistics --- 00:20:05.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.075 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=712770 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 
712770 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 712770 ']' 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:05.075 13:05:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.075 [2024-07-15 13:05:26.830916] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:20:05.075 [2024-07-15 13:05:26.830979] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:05.336 [2024-07-15 13:05:26.934061] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:05.336 [2024-07-15 13:05:27.040445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.336 [2024-07-15 13:05:27.040501] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.336 [2024-07-15 13:05:27.040509] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.336 [2024-07-15 13:05:27.040515] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.336 [2024-07-15 13:05:27.040521] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
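The target started above is launched inside the cvl_0_0_ns_spdk namespace with hugepages disabled (--no-huge -s 1024) and core mask 0x78, and the harness then blocks in waitforlisten until the RPC socket answers. A minimal launch-and-wait sketch of that pattern, assuming the default RPC socket path /var/tmp/spdk.sock and relative SPDK paths; the real waitforlisten helper in autotest_common.sh is more robust:

# Launch the target inside the test namespace without hugepages, then poll its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                           # keep polling until the app accepts RPCs
done
echo "nvmf_tgt (pid $nvmfpid) is ready"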
00:20:05.336 [2024-07-15 13:05:27.040685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:05.336 [2024-07-15 13:05:27.040828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:05.336 [2024-07-15 13:05:27.040989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:05.336 [2024-07-15 13:05:27.040989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.909 [2024-07-15 13:05:27.678164] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.909 Malloc0 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.909 [2024-07-15 13:05:27.719529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:05.909 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.910 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.910 { 00:20:05.910 "params": { 00:20:05.910 "name": "Nvme$subsystem", 00:20:05.910 "trtype": "$TEST_TRANSPORT", 00:20:05.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.910 "adrfam": "ipv4", 00:20:05.910 "trsvcid": "$NVMF_PORT", 00:20:05.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.910 "hdgst": ${hdgst:-false}, 00:20:05.910 "ddgst": ${ddgst:-false} 00:20:05.910 }, 00:20:05.910 "method": "bdev_nvme_attach_controller" 00:20:05.910 } 00:20:05.910 EOF 00:20:05.910 )") 00:20:05.910 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:06.171 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:06.171 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:06.171 13:05:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:06.171 "params": { 00:20:06.171 "name": "Nvme1", 00:20:06.171 "trtype": "tcp", 00:20:06.171 "traddr": "10.0.0.2", 00:20:06.171 "adrfam": "ipv4", 00:20:06.171 "trsvcid": "4420", 00:20:06.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.171 "hdgst": false, 00:20:06.171 "ddgst": false 00:20:06.171 }, 00:20:06.171 "method": "bdev_nvme_attach_controller" 00:20:06.171 }' 00:20:06.171 [2024-07-15 13:05:27.771612] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
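The target/bdevio.sh rpc_cmd calls traced above provision the target before the bdevio app is launched against the JSON config just printed: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem cnode1, its namespace, and a listener on 10.0.0.2:4420. The same sequence expressed as direct rpc.py calls, as a sketch that assumes the default /var/tmp/spdk.sock socket of the target started earlier:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192           # same transport flags as traced above
$rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420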
00:20:06.171 [2024-07-15 13:05:27.771684] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid712981 ] 00:20:06.171 [2024-07-15 13:05:27.849697] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:06.171 [2024-07-15 13:05:27.946411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.171 [2024-07-15 13:05:27.946543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.171 [2024-07-15 13:05:27.946546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.431 I/O targets: 00:20:06.431 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:06.431 00:20:06.431 00:20:06.431 CUnit - A unit testing framework for C - Version 2.1-3 00:20:06.432 http://cunit.sourceforge.net/ 00:20:06.432 00:20:06.432 00:20:06.432 Suite: bdevio tests on: Nvme1n1 00:20:06.432 Test: blockdev write read block ...passed 00:20:06.696 Test: blockdev write zeroes read block ...passed 00:20:06.696 Test: blockdev write zeroes read no split ...passed 00:20:06.696 Test: blockdev write zeroes read split ...passed 00:20:06.696 Test: blockdev write zeroes read split partial ...passed 00:20:06.696 Test: blockdev reset ...[2024-07-15 13:05:28.386469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:06.696 [2024-07-15 13:05:28.386526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x995970 (9): Bad file descriptor 00:20:06.696 [2024-07-15 13:05:28.441163] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:06.696 passed 00:20:06.696 Test: blockdev write read 8 blocks ...passed 00:20:06.696 Test: blockdev write read size > 128k ...passed 00:20:06.696 Test: blockdev write read invalid size ...passed 00:20:06.958 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:06.958 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:06.958 Test: blockdev write read max offset ...passed 00:20:06.958 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:06.958 Test: blockdev writev readv 8 blocks ...passed 00:20:06.958 Test: blockdev writev readv 30 x 1block ...passed 00:20:06.958 Test: blockdev writev readv block ...passed 00:20:06.958 Test: blockdev writev readv size > 128k ...passed 00:20:06.958 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:06.958 Test: blockdev comparev and writev ...[2024-07-15 13:05:28.748385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.958 [2024-07-15 13:05:28.748409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:06.958 [2024-07-15 13:05:28.748420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.958 [2024-07-15 13:05:28.748430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:06.958 [2024-07-15 13:05:28.748936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.958 [2024-07-15 13:05:28.748944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:06.958 [2024-07-15 13:05:28.748953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.958 [2024-07-15 13:05:28.748958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:06.958 [2024-07-15 13:05:28.749493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.958 [2024-07-15 13:05:28.749501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:06.958 [2024-07-15 13:05:28.749511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.958 [2024-07-15 13:05:28.749516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:06.958 [2024-07-15 13:05:28.749984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.958 [2024-07-15 13:05:28.749990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:06.958 [2024-07-15 13:05:28.750000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:06.958 [2024-07-15 13:05:28.750005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:07.219 passed 00:20:07.219 Test: blockdev nvme passthru rw ...passed 00:20:07.219 Test: blockdev nvme passthru vendor specific ...[2024-07-15 13:05:28.835092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:07.219 [2024-07-15 13:05:28.835101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:07.219 [2024-07-15 13:05:28.835515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:07.220 [2024-07-15 13:05:28.835522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:07.220 [2024-07-15 13:05:28.835889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:07.220 [2024-07-15 13:05:28.835902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:07.220 [2024-07-15 13:05:28.836286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:07.220 [2024-07-15 13:05:28.836293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:07.220 passed 00:20:07.220 Test: blockdev nvme admin passthru ...passed 00:20:07.220 Test: blockdev copy ...passed 00:20:07.220 00:20:07.220 Run Summary: Type Total Ran Passed Failed Inactive 00:20:07.220 suites 1 1 n/a 0 0 00:20:07.220 tests 23 23 23 0 0 00:20:07.220 asserts 152 152 152 0 n/a 00:20:07.220 00:20:07.220 Elapsed time = 1.450 seconds 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:07.481 rmmod nvme_tcp 00:20:07.481 rmmod nvme_fabrics 00:20:07.481 rmmod nvme_keyring 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 712770 ']' 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 712770 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 712770 ']' 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 712770 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 712770 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 712770' 00:20:07.481 killing process with pid 712770 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 712770 00:20:07.481 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 712770 00:20:08.054 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:08.054 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:08.054 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:08.054 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.054 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.054 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.054 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.054 13:05:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.968 13:05:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:09.968 00:20:09.968 real 0m13.218s 00:20:09.968 user 0m14.651s 00:20:09.968 sys 0m7.082s 00:20:09.968 13:05:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:09.968 13:05:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:09.968 ************************************ 00:20:09.968 END TEST nvmf_bdevio_no_huge 00:20:09.968 ************************************ 00:20:09.968 13:05:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:09.968 13:05:31 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:09.968 13:05:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:09.968 13:05:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.968 13:05:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:09.968 ************************************ 00:20:09.968 START TEST nvmf_tls 00:20:09.968 ************************************ 00:20:09.968 13:05:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:10.230 * Looking for test storage... 
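Before tls.sh gets going, the bdevio run above has already torn itself down: the subsystem is deleted over RPC, the kernel nvme-tcp/nvme-fabrics modules pulled in for the test are unloaded, the target process is killed, and the test interfaces are flushed. A condensed sketch of that teardown; the namespace removal step is an assumption about what the _remove_spdk_ns helper does, and killprocess in the harness handles signalling and waiting more carefully:

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp        # also drops nvme_fabrics/nvme_keyring, as the rmmod lines above show
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
ip -4 addr flush cvl_0_1       # as traced above
ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns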
00:20:10.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:10.230 13:05:31 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.230 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:10.230 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.230 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.230 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.230 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.230 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.230 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.230 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.230 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.231 13:05:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.375 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.375 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.375 
13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.375 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.375 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.375 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.375 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:18.376 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:18.376 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:18.376 Found net devices under 0000:31:00.0: cvl_0_0 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:18.376 Found net devices under 0000:31:00.1: cvl_0_1 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:18.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:20:18.376 00:20:18.376 --- 10.0.0.2 ping statistics --- 00:20:18.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.376 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:20:18.376 00:20:18.376 --- 10.0.0.1 ping statistics --- 00:20:18.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.376 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.376 13:05:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.376 13:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:18.376 13:05:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.376 13:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.376 13:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.376 13:05:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=717875 00:20:18.376 13:05:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 717875 00:20:18.376 13:05:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:18.376 13:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 717875 ']' 00:20:18.376 13:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.376 13:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.376 13:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.376 13:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.376 13:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.376 [2024-07-15 13:05:40.091343] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:20:18.376 [2024-07-15 13:05:40.091413] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.376 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.376 [2024-07-15 13:05:40.191119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.638 [2024-07-15 13:05:40.285006] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.638 [2024-07-15 13:05:40.285064] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:18.638 [2024-07-15 13:05:40.285073] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.638 [2024-07-15 13:05:40.285079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.638 [2024-07-15 13:05:40.285086] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.638 [2024-07-15 13:05:40.285115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.211 13:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.211 13:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:19.211 13:05:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.211 13:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.211 13:05:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.211 13:05:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.211 13:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:19.211 13:05:40 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:19.472 true 00:20:19.472 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.472 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:19.472 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:19.472 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:19.472 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:19.734 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:19.734 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.995 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:19.995 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:19.995 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:19.995 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.995 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:20.257 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:20.257 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:20.257 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:20.257 13:05:41 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:20.518 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:20.518 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:20.518 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:20.518 13:05:42 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:20.518 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:20.779 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:20.779 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:20.779 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:20.779 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:20.779 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.muXTAcub1Y 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.auoQZugDqY 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.muXTAcub1Y 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.auoQZugDqY 00:20:21.041 13:05:42 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:21.303 13:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:21.566 13:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.muXTAcub1Y 00:20:21.566 13:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.muXTAcub1Y 00:20:21.566 13:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:21.827 [2024-07-15 13:05:43.418729] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.828 13:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:21.828 13:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:22.088 [2024-07-15 13:05:43.719403] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.088 [2024-07-15 13:05:43.719576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.088 13:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:22.088 malloc0 00:20:22.088 13:05:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:22.348 13:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.muXTAcub1Y 00:20:22.348 [2024-07-15 13:05:44.166556] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:22.608 13:05:44 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.muXTAcub1Y 00:20:22.608 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.600 Initializing NVMe Controllers 00:20:32.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:32.600 Initialization complete. Launching workers. 
00:20:32.600 ======================================================== 00:20:32.600 Latency(us) 00:20:32.600 Device Information : IOPS MiB/s Average min max 00:20:32.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19093.76 74.58 3351.92 1049.52 6617.63 00:20:32.600 ======================================================== 00:20:32.600 Total : 19093.76 74.58 3351.92 1049.52 6617.63 00:20:32.600 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.muXTAcub1Y 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.muXTAcub1Y' 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=720720 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 720720 /var/tmp/bdevperf.sock 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 720720 ']' 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:32.600 13:05:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.600 [2024-07-15 13:05:54.337196] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
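For reference, the listener that every run in this section connects to was configured by setup_nvmf_tgt, traced in fragments above. Condensed into one place (RPC names, address, NQNs and the key file exactly as in the trace; the rpc.py path is shortened for readability):

    rpc=./scripts/rpc.py                     # talks to the nvmf_tgt started with --wait-for-rpc
    key_path=/tmp/tmp.muXTAcub1Y             # holds the NVMeTLSkey-1:01:... string, chmod 0600

    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc sock_impl_get_options -i ssl | jq -r .tls_version    # -> 13, as checked in the trace
    $rpc framework_start_init                # finish startup only after the ssl options are set
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

Starting nvmf_tgt with --wait-for-rpc is what makes the sock_impl_set_options calls effective: the ssl socket implementation has to be configured before framework_start_init brings the rest of the target up.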
00:20:32.600 [2024-07-15 13:05:54.337258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid720720 ] 00:20:32.600 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.600 [2024-07-15 13:05:54.391629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.861 [2024-07-15 13:05:54.444193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.433 13:05:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.433 13:05:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:33.433 13:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.muXTAcub1Y 00:20:33.433 [2024-07-15 13:05:55.257429] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.433 [2024-07-15 13:05:55.257483] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:33.694 TLSTESTn1 00:20:33.694 13:05:55 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:33.694 Running I/O for 10 seconds... 00:20:43.700 00:20:43.700 Latency(us) 00:20:43.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.700 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:43.700 Verification LBA range: start 0x0 length 0x2000 00:20:43.700 TLSTESTn1 : 10.03 4186.82 16.35 0.00 0.00 30512.61 5352.11 51336.53 00:20:43.700 =================================================================================================================== 00:20:43.700 Total : 4186.82 16.35 0.00 0.00 30512.61 5352.11 51336.53 00:20:43.700 0 00:20:43.700 13:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:43.700 13:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 720720 00:20:43.700 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 720720 ']' 00:20:43.700 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 720720 00:20:43.700 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:43.700 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.700 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 720720 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 720720' 00:20:43.964 killing process with pid 720720 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 720720 00:20:43.964 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.964 00:20:43.964 Latency(us) 00:20:43.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:20:43.964 =================================================================================================================== 00:20:43.964 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.964 [2024-07-15 13:06:05.566743] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 720720 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.auoQZugDqY 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.auoQZugDqY 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.auoQZugDqY 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.auoQZugDqY' 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=722992 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 722992 /var/tmp/bdevperf.sock 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 722992 ']' 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.964 13:06:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.964 [2024-07-15 13:06:05.731495] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
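Each bdevperf instance in this part of the test follows the same driver-side pattern; only the attach parameters change between the passing run above and the failure cases below: start bdevperf with -z so it waits for RPC, attach a TLS-enabled controller with bdev_nvme_attach_controller --psk, then drive I/O through bdevperf.py perform_tests. Condensed from the successful TLSTESTn1 run (socket path, queue settings and NQNs as in the trace; waitforlisten and killprocess are the autotest_common.sh helpers seen throughout):

    bdevperf=./build/examples/bdevperf
    rpc=./scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" "$sock"

    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.muXTAcub1Y
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests

    killprocess "$bdevperf_pid"              # helper from the trace: kill and wait on the pid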
00:20:43.965 [2024-07-15 13:06:05.731552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid722992 ] 00:20:43.965 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.965 [2024-07-15 13:06:05.787945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.225 [2024-07-15 13:06:05.838657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.795 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.795 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:44.795 13:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.auoQZugDqY 00:20:45.055 [2024-07-15 13:06:06.647989] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.055 [2024-07-15 13:06:06.648051] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:45.055 [2024-07-15 13:06:06.652412] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:45.055 [2024-07-15 13:06:06.653061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8d80 (107): Transport endpoint is not connected 00:20:45.055 [2024-07-15 13:06:06.654055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c8d80 (9): Bad file descriptor 00:20:45.055 [2024-07-15 13:06:06.655056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:45.055 [2024-07-15 13:06:06.655064] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:45.055 [2024-07-15 13:06:06.655071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:45.055 request: 00:20:45.055 { 00:20:45.055 "name": "TLSTEST", 00:20:45.055 "trtype": "tcp", 00:20:45.055 "traddr": "10.0.0.2", 00:20:45.055 "adrfam": "ipv4", 00:20:45.055 "trsvcid": "4420", 00:20:45.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.055 "prchk_reftag": false, 00:20:45.055 "prchk_guard": false, 00:20:45.055 "hdgst": false, 00:20:45.055 "ddgst": false, 00:20:45.055 "psk": "/tmp/tmp.auoQZugDqY", 00:20:45.055 "method": "bdev_nvme_attach_controller", 00:20:45.055 "req_id": 1 00:20:45.055 } 00:20:45.055 Got JSON-RPC error response 00:20:45.055 response: 00:20:45.055 { 00:20:45.055 "code": -5, 00:20:45.055 "message": "Input/output error" 00:20:45.055 } 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 722992 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 722992 ']' 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 722992 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 722992 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 722992' 00:20:45.055 killing process with pid 722992 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 722992 00:20:45.055 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.055 00:20:45.055 Latency(us) 00:20:45.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.055 =================================================================================================================== 00:20:45.055 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.055 [2024-07-15 13:06:06.740617] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 722992 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.muXTAcub1Y 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.muXTAcub1Y 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.muXTAcub1Y 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.muXTAcub1Y' 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=723328 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 723328 /var/tmp/bdevperf.sock 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 723328 ']' 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:45.055 13:06:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.316 [2024-07-15 13:06:06.897607] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
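The four failure cases in this part of the test (wrong key, unknown host NQN, unknown subsystem NQN, and no key at all) all rely on the same expect-failure idiom: run_bdevperf is wrapped in NOT, so the step passes only when bdev_nvme_attach_controller returns the JSON-RPC -5 "Input/output error" seen in each trace. A simplified sketch of that idiom, in the spirit of the autotest_common.sh machinery traced around it (the real helper does more bookkeeping than this):

    # NOT <cmd...>: succeed only if <cmd> fails -- a simplified stand-in for the helper above
    NOT() {
        if "$@"; then
            return 1        # the command unexpectedly succeeded
        fi
        return 0            # a non-zero exit status is exactly what the test wants
    }

    # e.g. the case being set up here: known subsystem and key, but an unregistered host NQN
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.muXTAcub1Y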
00:20:45.316 [2024-07-15 13:06:06.897660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723328 ] 00:20:45.316 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.316 [2024-07-15 13:06:06.953846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.316 [2024-07-15 13:06:07.005601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.944 13:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.944 13:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:45.944 13:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.muXTAcub1Y 00:20:46.204 [2024-07-15 13:06:07.810972] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.204 [2024-07-15 13:06:07.811036] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:46.204 [2024-07-15 13:06:07.815347] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:46.204 [2024-07-15 13:06:07.815366] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:46.204 [2024-07-15 13:06:07.815387] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:46.204 [2024-07-15 13:06:07.816029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2486d80 (107): Transport endpoint is not connected 00:20:46.204 [2024-07-15 13:06:07.817024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2486d80 (9): Bad file descriptor 00:20:46.204 [2024-07-15 13:06:07.818026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:46.204 [2024-07-15 13:06:07.818036] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:46.204 [2024-07-15 13:06:07.818043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
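The target-side error in this case is the telling one: the listener looks up the retained PSK by an identity string built from the connecting host NQN and the target subsystem NQN (the NVMe0R01 prefix is copied verbatim from the error above; its individual field meanings are not spelled out in the trace), and nothing was ever registered for host2, so the handshake is rejected before any NVMe traffic flows. Assembling the same string shows exactly what the target searched for:

    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
    #    which matches the "Could not find PSK for identity" message above, since only host1
    #    was registered via nvmf_subsystem_add_host --psk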
00:20:46.204 request: 00:20:46.204 { 00:20:46.204 "name": "TLSTEST", 00:20:46.204 "trtype": "tcp", 00:20:46.204 "traddr": "10.0.0.2", 00:20:46.204 "adrfam": "ipv4", 00:20:46.204 "trsvcid": "4420", 00:20:46.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.204 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:46.204 "prchk_reftag": false, 00:20:46.204 "prchk_guard": false, 00:20:46.204 "hdgst": false, 00:20:46.204 "ddgst": false, 00:20:46.204 "psk": "/tmp/tmp.muXTAcub1Y", 00:20:46.204 "method": "bdev_nvme_attach_controller", 00:20:46.204 "req_id": 1 00:20:46.204 } 00:20:46.204 Got JSON-RPC error response 00:20:46.204 response: 00:20:46.204 { 00:20:46.204 "code": -5, 00:20:46.204 "message": "Input/output error" 00:20:46.204 } 00:20:46.204 13:06:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 723328 00:20:46.204 13:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 723328 ']' 00:20:46.204 13:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 723328 00:20:46.204 13:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:46.204 13:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:46.204 13:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723328 00:20:46.204 13:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:46.204 13:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:46.204 13:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723328' 00:20:46.204 killing process with pid 723328 00:20:46.204 13:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 723328 00:20:46.204 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.204 00:20:46.204 Latency(us) 00:20:46.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.204 =================================================================================================================== 00:20:46.204 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:46.204 [2024-07-15 13:06:07.900114] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:46.204 13:06:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 723328 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.muXTAcub1Y 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.muXTAcub1Y 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.muXTAcub1Y 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.muXTAcub1Y' 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=723444 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 723444 /var/tmp/bdevperf.sock 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 723444 ']' 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.204 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.464 [2024-07-15 13:06:08.056521] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:20:46.464 [2024-07-15 13:06:08.056582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723444 ] 00:20:46.464 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.464 [2024-07-15 13:06:08.113473] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.464 [2024-07-15 13:06:08.165530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.033 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.033 13:06:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:47.033 13:06:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.muXTAcub1Y 00:20:47.323 [2024-07-15 13:06:08.974909] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.323 [2024-07-15 13:06:08.974971] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:47.323 [2024-07-15 13:06:08.986478] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:47.323 [2024-07-15 13:06:08.986497] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:47.323 [2024-07-15 13:06:08.986516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:47.323 [2024-07-15 13:06:08.986981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a7d80 (107): Transport endpoint is not connected 00:20:47.323 [2024-07-15 13:06:08.987974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a7d80 (9): Bad file descriptor 00:20:47.323 [2024-07-15 13:06:08.988976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:47.323 [2024-07-15 13:06:08.988983] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:47.323 [2024-07-15 13:06:08.988991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:47.323 request: 00:20:47.323 { 00:20:47.323 "name": "TLSTEST", 00:20:47.323 "trtype": "tcp", 00:20:47.323 "traddr": "10.0.0.2", 00:20:47.323 "adrfam": "ipv4", 00:20:47.323 "trsvcid": "4420", 00:20:47.323 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:47.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.323 "prchk_reftag": false, 00:20:47.323 "prchk_guard": false, 00:20:47.323 "hdgst": false, 00:20:47.323 "ddgst": false, 00:20:47.323 "psk": "/tmp/tmp.muXTAcub1Y", 00:20:47.323 "method": "bdev_nvme_attach_controller", 00:20:47.323 "req_id": 1 00:20:47.323 } 00:20:47.323 Got JSON-RPC error response 00:20:47.323 response: 00:20:47.323 { 00:20:47.323 "code": -5, 00:20:47.323 "message": "Input/output error" 00:20:47.323 } 00:20:47.323 13:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 723444 00:20:47.323 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 723444 ']' 00:20:47.323 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 723444 00:20:47.323 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:47.323 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:47.323 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723444 00:20:47.323 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:47.323 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:47.323 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723444' 00:20:47.323 killing process with pid 723444 00:20:47.323 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 723444 00:20:47.323 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.323 00:20:47.323 Latency(us) 00:20:47.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.323 =================================================================================================================== 00:20:47.323 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:47.323 [2024-07-15 13:06:09.075804] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:47.323 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 723444 00:20:47.671 13:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=723949 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 723949 /var/tmp/bdevperf.sock 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 723949 ']' 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.672 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.672 [2024-07-15 13:06:09.242728] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:20:47.672 [2024-07-15 13:06:09.242783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723949 ] 00:20:47.672 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.672 [2024-07-15 13:06:09.299081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.672 [2024-07-15 13:06:09.351150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.245 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.245 13:06:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:48.245 13:06:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:48.506 [2024-07-15 13:06:10.149381] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:48.507 [2024-07-15 13:06:10.151034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d2460 (9): Bad file descriptor 00:20:48.507 [2024-07-15 13:06:10.152033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:48.507 [2024-07-15 13:06:10.152042] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:48.507 [2024-07-15 13:06:10.152050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:48.507 request: 00:20:48.507 { 00:20:48.507 "name": "TLSTEST", 00:20:48.507 "trtype": "tcp", 00:20:48.507 "traddr": "10.0.0.2", 00:20:48.507 "adrfam": "ipv4", 00:20:48.507 "trsvcid": "4420", 00:20:48.507 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.507 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.507 "prchk_reftag": false, 00:20:48.507 "prchk_guard": false, 00:20:48.507 "hdgst": false, 00:20:48.507 "ddgst": false, 00:20:48.507 "method": "bdev_nvme_attach_controller", 00:20:48.507 "req_id": 1 00:20:48.507 } 00:20:48.507 Got JSON-RPC error response 00:20:48.507 response: 00:20:48.507 { 00:20:48.507 "code": -5, 00:20:48.507 "message": "Input/output error" 00:20:48.507 } 00:20:48.507 13:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 723949 00:20:48.507 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 723949 ']' 00:20:48.507 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 723949 00:20:48.507 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:48.507 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.507 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723949 00:20:48.507 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:48.507 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:48.507 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723949' 00:20:48.507 killing process with pid 723949 00:20:48.507 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 723949 00:20:48.507 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.507 00:20:48.507 Latency(us) 00:20:48.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.507 =================================================================================================================== 00:20:48.507 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:48.507 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 723949 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 717875 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 717875 ']' 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 717875 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 717875 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 717875' 00:20:48.768 killing 
process with pid 717875 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 717875 00:20:48.768 [2024-07-15 13:06:10.398539] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 717875 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.PAxKqfZ8FO 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.PAxKqfZ8FO 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=724483 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 724483 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 724483 ']' 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:48.768 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.769 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:48.769 13:06:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.030 [2024-07-15 13:06:10.636014] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
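format_interchange_psk above wraps raw key material into the interchange form that both nvmf_subsystem_add_host --psk and bdev_nvme_attach_controller --psk consume: NVMeTLSkey-1:<hash>:<base64 blob>:, where the hash field is 01 for the shorter key used earlier and 02 for this longer one, matching the digest argument passed in the trace. A hedged sketch of what the helper appears to do (the appended 4-byte CRC32 and its little-endian byte order are an assumption about the script's internals, not something shown verbatim in the trace):

    key=00112233445566778899aabbccddeeff0011223344556677
    python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")      # assumed integrity trailer
    print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())
    ' "$key"
    # the trace above reports: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The resulting string is written to a mktemp file, chmod 0600, and handed to the restarted target the same way the shorter keys were before it.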
00:20:49.030 [2024-07-15 13:06:10.636079] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.030 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.030 [2024-07-15 13:06:10.728931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.030 [2024-07-15 13:06:10.785186] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.030 [2024-07-15 13:06:10.785220] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.030 [2024-07-15 13:06:10.785226] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.030 [2024-07-15 13:06:10.785236] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.030 [2024-07-15 13:06:10.785241] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.030 [2024-07-15 13:06:10.785260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.601 13:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:49.601 13:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:49.601 13:06:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:49.601 13:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:49.601 13:06:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.862 13:06:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.862 13:06:11 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.PAxKqfZ8FO 00:20:49.862 13:06:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PAxKqfZ8FO 00:20:49.862 13:06:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:49.862 [2024-07-15 13:06:11.575757] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.862 13:06:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:50.122 13:06:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:50.122 [2024-07-15 13:06:11.888510] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:50.122 [2024-07-15 13:06:11.888677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.122 13:06:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:50.384 malloc0 00:20:50.384 13:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:50.384 13:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.PAxKqfZ8FO 00:20:50.645 [2024-07-15 13:06:12.323408] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PAxKqfZ8FO 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PAxKqfZ8FO' 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=724849 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 724849 /var/tmp/bdevperf.sock 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 724849 ']' 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.645 13:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.646 13:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.646 13:06:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.646 [2024-07-15 13:06:12.373046] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:20:50.646 [2024-07-15 13:06:12.373115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724849 ] 00:20:50.646 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.646 [2024-07-15 13:06:12.433790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.907 [2024-07-15 13:06:12.485592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.477 13:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.477 13:06:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:51.477 13:06:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PAxKqfZ8FO 00:20:51.738 [2024-07-15 13:06:13.306891] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.738 [2024-07-15 13:06:13.306952] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:51.738 TLSTESTn1 00:20:51.738 13:06:13 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:51.738 Running I/O for 10 seconds... 00:21:01.746 00:21:01.746 Latency(us) 00:21:01.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.746 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:01.746 Verification LBA range: start 0x0 length 0x2000 00:21:01.746 TLSTESTn1 : 10.02 5233.49 20.44 0.00 0.00 24417.06 4669.44 80390.83 00:21:01.746 =================================================================================================================== 00:21:01.746 Total : 5233.49 20.44 0.00 0.00 24417.06 4669.44 80390.83 00:21:01.746 0 00:21:01.746 13:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.746 13:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 724849 00:21:01.746 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 724849 ']' 00:21:01.746 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 724849 00:21:01.746 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:01.746 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:01.746 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 724849 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 724849' 00:21:02.007 killing process with pid 724849 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 724849 00:21:02.007 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.007 00:21:02.007 Latency(us) 00:21:02.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:21:02.007 =================================================================================================================== 00:21:02.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:02.007 [2024-07-15 13:06:23.606818] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 724849 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.PAxKqfZ8FO 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PAxKqfZ8FO 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PAxKqfZ8FO 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PAxKqfZ8FO 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PAxKqfZ8FO' 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=727013 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 727013 /var/tmp/bdevperf.sock 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 727013 ']' 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.007 13:06:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.007 [2024-07-15 13:06:23.777135] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:21:02.007 [2024-07-15 13:06:23.777191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727013 ] 00:21:02.007 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.269 [2024-07-15 13:06:23.834715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.269 [2024-07-15 13:06:23.886746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.864 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.864 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:02.864 13:06:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PAxKqfZ8FO 00:21:02.864 [2024-07-15 13:06:24.683996] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.864 [2024-07-15 13:06:24.684039] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:02.864 [2024-07-15 13:06:24.684044] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.PAxKqfZ8FO 00:21:02.864 request: 00:21:02.864 { 00:21:02.864 "name": "TLSTEST", 00:21:02.864 "trtype": "tcp", 00:21:02.864 "traddr": "10.0.0.2", 00:21:02.864 "adrfam": "ipv4", 00:21:02.864 "trsvcid": "4420", 00:21:02.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.864 "prchk_reftag": false, 00:21:02.864 "prchk_guard": false, 00:21:02.864 "hdgst": false, 00:21:02.864 "ddgst": false, 00:21:02.864 "psk": "/tmp/tmp.PAxKqfZ8FO", 00:21:02.864 "method": "bdev_nvme_attach_controller", 00:21:02.864 "req_id": 1 00:21:02.864 } 00:21:02.864 Got JSON-RPC error response 00:21:02.864 response: 00:21:02.864 { 00:21:02.864 "code": -1, 00:21:02.864 "message": "Operation not permitted" 00:21:02.864 } 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 727013 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 727013 ']' 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 727013 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 727013 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 727013' 00:21:03.125 killing process with pid 727013 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 727013 00:21:03.125 Received shutdown signal, test time was about 10.000000 seconds 00:21:03.125 00:21:03.125 Latency(us) 00:21:03.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.125 =================================================================================================================== 
00:21:03.125 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 727013 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 724483 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 724483 ']' 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 724483 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 724483 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 724483' 00:21:03.125 killing process with pid 724483 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 724483 00:21:03.125 [2024-07-15 13:06:24.930295] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:03.125 13:06:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 724483 00:21:03.386 13:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:03.386 13:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:03.386 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:03.386 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.386 13:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=727220 00:21:03.386 13:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 727220 00:21:03.386 13:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:03.386 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 727220 ']' 00:21:03.386 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.386 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.386 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.386 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.386 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.386 [2024-07-15 13:06:25.108898] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:21:03.386 [2024-07-15 13:06:25.108949] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.386 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.386 [2024-07-15 13:06:25.202269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.647 [2024-07-15 13:06:25.254596] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.647 [2024-07-15 13:06:25.254631] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.647 [2024-07-15 13:06:25.254637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.647 [2024-07-15 13:06:25.254641] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.647 [2024-07-15 13:06:25.254645] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.647 [2024-07-15 13:06:25.254663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.PAxKqfZ8FO 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.PAxKqfZ8FO 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.PAxKqfZ8FO 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PAxKqfZ8FO 00:21:04.217 13:06:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:04.479 [2024-07-15 13:06:26.064834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.479 13:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:04.479 13:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:04.740 [2024-07-15 13:06:26.373576] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:21:04.740 [2024-07-15 13:06:26.373744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.740 13:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:04.740 malloc0 00:21:04.740 13:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:05.001 13:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PAxKqfZ8FO 00:21:05.001 [2024-07-15 13:06:26.796331] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:05.001 [2024-07-15 13:06:26.796350] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:05.001 [2024-07-15 13:06:26.796370] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:05.001 request: 00:21:05.001 { 00:21:05.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.001 "host": "nqn.2016-06.io.spdk:host1", 00:21:05.001 "psk": "/tmp/tmp.PAxKqfZ8FO", 00:21:05.001 "method": "nvmf_subsystem_add_host", 00:21:05.001 "req_id": 1 00:21:05.001 } 00:21:05.001 Got JSON-RPC error response 00:21:05.001 response: 00:21:05.001 { 00:21:05.001 "code": -32603, 00:21:05.001 "message": "Internal error" 00:21:05.001 } 00:21:05.001 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:05.001 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:05.001 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:05.001 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:05.001 13:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 727220 00:21:05.001 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 727220 ']' 00:21:05.001 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 727220 00:21:05.001 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:05.001 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.001 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 727220 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 727220' 00:21:05.262 killing process with pid 727220 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 727220 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 727220 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.PAxKqfZ8FO 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=727630 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 727630 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 727630 ']' 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.262 13:06:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.262 [2024-07-15 13:06:27.049226] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:21:05.262 [2024-07-15 13:06:27.049287] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.262 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.523 [2024-07-15 13:06:27.137138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.523 [2024-07-15 13:06:27.195259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.523 [2024-07-15 13:06:27.195295] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.523 [2024-07-15 13:06:27.195301] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.523 [2024-07-15 13:06:27.195306] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.523 [2024-07-15 13:06:27.195310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
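The chmod 0666 / chmod 0600 sequence above is the PSK file-permission check: with a world-readable key, both the initiator-side bdev_nvme_attach_controller and the target-side nvmf_subsystem_add_host refuse to load it ("Incorrect permissions for PSK file", JSON-RPC errors -1 and -32603), and the test restores owner-only permissions before continuing. A minimal sketch of that check, assuming it is run from an SPDK checkout against a target that is already listening with TLS enabled; the key path mirrors this run and is only illustrative:

    KEY=/tmp/tmp.PAxKqfZ8FO
    chmod 0666 "$KEY"        # overly permissive mode on the PSK file
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$KEY"   # expected to fail: Incorrect permissions for PSK file
    chmod 0600 "$KEY"        # owner-only mode is what the PSK loader accepts
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$KEY"   # succeeds (with the PSK-path deprecation warning)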
00:21:05.523 [2024-07-15 13:06:27.195327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.094 13:06:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.094 13:06:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:06.094 13:06:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.094 13:06:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:06.094 13:06:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.094 13:06:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.094 13:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.PAxKqfZ8FO 00:21:06.094 13:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PAxKqfZ8FO 00:21:06.094 13:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:06.362 [2024-07-15 13:06:27.986515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.362 13:06:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:06.362 13:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:06.622 [2024-07-15 13:06:28.287245] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:06.622 [2024-07-15 13:06:28.287400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.622 13:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:06.622 malloc0 00:21:06.622 13:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:06.881 13:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PAxKqfZ8FO 00:21:07.141 [2024-07-15 13:06:28.726061] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:07.141 13:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:07.141 13:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=727986 00:21:07.141 13:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:07.141 13:06:28 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 727986 /var/tmp/bdevperf.sock 00:21:07.141 13:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 727986 ']' 00:21:07.141 13:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.141 13:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.141 13:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.141 13:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.141 13:06:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.141 [2024-07-15 13:06:28.762702] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:21:07.141 [2024-07-15 13:06:28.762744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727986 ] 00:21:07.141 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.141 [2024-07-15 13:06:28.832524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.141 [2024-07-15 13:06:28.884289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.079 13:06:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.079 13:06:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:08.079 13:06:29 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PAxKqfZ8FO 00:21:08.079 [2024-07-15 13:06:29.697574] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.079 [2024-07-15 13:06:29.697632] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:08.079 TLSTESTn1 00:21:08.079 13:06:29 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:08.338 13:06:30 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:08.338 "subsystems": [ 00:21:08.338 { 00:21:08.338 "subsystem": "keyring", 00:21:08.338 "config": [] 00:21:08.338 }, 00:21:08.338 { 00:21:08.338 "subsystem": "iobuf", 00:21:08.338 "config": [ 00:21:08.338 { 00:21:08.338 "method": "iobuf_set_options", 00:21:08.338 "params": { 00:21:08.338 "small_pool_count": 8192, 00:21:08.338 "large_pool_count": 1024, 00:21:08.338 "small_bufsize": 8192, 00:21:08.338 "large_bufsize": 135168 00:21:08.338 } 00:21:08.338 } 00:21:08.338 ] 00:21:08.338 }, 00:21:08.338 { 00:21:08.338 "subsystem": "sock", 00:21:08.338 "config": [ 00:21:08.338 { 00:21:08.338 "method": "sock_set_default_impl", 00:21:08.338 "params": { 00:21:08.338 "impl_name": "posix" 00:21:08.338 } 00:21:08.338 }, 00:21:08.338 { 00:21:08.338 "method": "sock_impl_set_options", 00:21:08.338 "params": { 00:21:08.338 "impl_name": "ssl", 00:21:08.338 "recv_buf_size": 4096, 00:21:08.338 "send_buf_size": 4096, 00:21:08.338 "enable_recv_pipe": true, 00:21:08.338 "enable_quickack": false, 00:21:08.338 "enable_placement_id": 0, 00:21:08.338 "enable_zerocopy_send_server": true, 00:21:08.338 "enable_zerocopy_send_client": false, 00:21:08.338 "zerocopy_threshold": 0, 00:21:08.338 "tls_version": 0, 00:21:08.338 "enable_ktls": false 00:21:08.338 } 00:21:08.338 }, 00:21:08.338 { 00:21:08.338 "method": "sock_impl_set_options", 00:21:08.338 "params": { 00:21:08.338 "impl_name": "posix", 00:21:08.338 "recv_buf_size": 2097152, 00:21:08.338 
"send_buf_size": 2097152, 00:21:08.338 "enable_recv_pipe": true, 00:21:08.338 "enable_quickack": false, 00:21:08.338 "enable_placement_id": 0, 00:21:08.338 "enable_zerocopy_send_server": true, 00:21:08.338 "enable_zerocopy_send_client": false, 00:21:08.339 "zerocopy_threshold": 0, 00:21:08.339 "tls_version": 0, 00:21:08.339 "enable_ktls": false 00:21:08.339 } 00:21:08.339 } 00:21:08.339 ] 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "subsystem": "vmd", 00:21:08.339 "config": [] 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "subsystem": "accel", 00:21:08.339 "config": [ 00:21:08.339 { 00:21:08.339 "method": "accel_set_options", 00:21:08.339 "params": { 00:21:08.339 "small_cache_size": 128, 00:21:08.339 "large_cache_size": 16, 00:21:08.339 "task_count": 2048, 00:21:08.339 "sequence_count": 2048, 00:21:08.339 "buf_count": 2048 00:21:08.339 } 00:21:08.339 } 00:21:08.339 ] 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "subsystem": "bdev", 00:21:08.339 "config": [ 00:21:08.339 { 00:21:08.339 "method": "bdev_set_options", 00:21:08.339 "params": { 00:21:08.339 "bdev_io_pool_size": 65535, 00:21:08.339 "bdev_io_cache_size": 256, 00:21:08.339 "bdev_auto_examine": true, 00:21:08.339 "iobuf_small_cache_size": 128, 00:21:08.339 "iobuf_large_cache_size": 16 00:21:08.339 } 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "method": "bdev_raid_set_options", 00:21:08.339 "params": { 00:21:08.339 "process_window_size_kb": 1024 00:21:08.339 } 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "method": "bdev_iscsi_set_options", 00:21:08.339 "params": { 00:21:08.339 "timeout_sec": 30 00:21:08.339 } 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "method": "bdev_nvme_set_options", 00:21:08.339 "params": { 00:21:08.339 "action_on_timeout": "none", 00:21:08.339 "timeout_us": 0, 00:21:08.339 "timeout_admin_us": 0, 00:21:08.339 "keep_alive_timeout_ms": 10000, 00:21:08.339 "arbitration_burst": 0, 00:21:08.339 "low_priority_weight": 0, 00:21:08.339 "medium_priority_weight": 0, 00:21:08.339 "high_priority_weight": 0, 00:21:08.339 "nvme_adminq_poll_period_us": 10000, 00:21:08.339 "nvme_ioq_poll_period_us": 0, 00:21:08.339 "io_queue_requests": 0, 00:21:08.339 "delay_cmd_submit": true, 00:21:08.339 "transport_retry_count": 4, 00:21:08.339 "bdev_retry_count": 3, 00:21:08.339 "transport_ack_timeout": 0, 00:21:08.339 "ctrlr_loss_timeout_sec": 0, 00:21:08.339 "reconnect_delay_sec": 0, 00:21:08.339 "fast_io_fail_timeout_sec": 0, 00:21:08.339 "disable_auto_failback": false, 00:21:08.339 "generate_uuids": false, 00:21:08.339 "transport_tos": 0, 00:21:08.339 "nvme_error_stat": false, 00:21:08.339 "rdma_srq_size": 0, 00:21:08.339 "io_path_stat": false, 00:21:08.339 "allow_accel_sequence": false, 00:21:08.339 "rdma_max_cq_size": 0, 00:21:08.339 "rdma_cm_event_timeout_ms": 0, 00:21:08.339 "dhchap_digests": [ 00:21:08.339 "sha256", 00:21:08.339 "sha384", 00:21:08.339 "sha512" 00:21:08.339 ], 00:21:08.339 "dhchap_dhgroups": [ 00:21:08.339 "null", 00:21:08.339 "ffdhe2048", 00:21:08.339 "ffdhe3072", 00:21:08.339 "ffdhe4096", 00:21:08.339 "ffdhe6144", 00:21:08.339 "ffdhe8192" 00:21:08.339 ] 00:21:08.339 } 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "method": "bdev_nvme_set_hotplug", 00:21:08.339 "params": { 00:21:08.339 "period_us": 100000, 00:21:08.339 "enable": false 00:21:08.339 } 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "method": "bdev_malloc_create", 00:21:08.339 "params": { 00:21:08.339 "name": "malloc0", 00:21:08.339 "num_blocks": 8192, 00:21:08.339 "block_size": 4096, 00:21:08.339 "physical_block_size": 4096, 00:21:08.339 "uuid": 
"e75c6b57-d6f2-4698-b0d6-c9dc03a7fd39", 00:21:08.339 "optimal_io_boundary": 0 00:21:08.339 } 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "method": "bdev_wait_for_examine" 00:21:08.339 } 00:21:08.339 ] 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "subsystem": "nbd", 00:21:08.339 "config": [] 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "subsystem": "scheduler", 00:21:08.339 "config": [ 00:21:08.339 { 00:21:08.339 "method": "framework_set_scheduler", 00:21:08.339 "params": { 00:21:08.339 "name": "static" 00:21:08.339 } 00:21:08.339 } 00:21:08.339 ] 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "subsystem": "nvmf", 00:21:08.339 "config": [ 00:21:08.339 { 00:21:08.339 "method": "nvmf_set_config", 00:21:08.339 "params": { 00:21:08.339 "discovery_filter": "match_any", 00:21:08.339 "admin_cmd_passthru": { 00:21:08.339 "identify_ctrlr": false 00:21:08.339 } 00:21:08.339 } 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "method": "nvmf_set_max_subsystems", 00:21:08.339 "params": { 00:21:08.339 "max_subsystems": 1024 00:21:08.339 } 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "method": "nvmf_set_crdt", 00:21:08.339 "params": { 00:21:08.339 "crdt1": 0, 00:21:08.339 "crdt2": 0, 00:21:08.339 "crdt3": 0 00:21:08.339 } 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "method": "nvmf_create_transport", 00:21:08.339 "params": { 00:21:08.339 "trtype": "TCP", 00:21:08.339 "max_queue_depth": 128, 00:21:08.339 "max_io_qpairs_per_ctrlr": 127, 00:21:08.339 "in_capsule_data_size": 4096, 00:21:08.339 "max_io_size": 131072, 00:21:08.339 "io_unit_size": 131072, 00:21:08.339 "max_aq_depth": 128, 00:21:08.339 "num_shared_buffers": 511, 00:21:08.339 "buf_cache_size": 4294967295, 00:21:08.339 "dif_insert_or_strip": false, 00:21:08.339 "zcopy": false, 00:21:08.339 "c2h_success": false, 00:21:08.339 "sock_priority": 0, 00:21:08.339 "abort_timeout_sec": 1, 00:21:08.339 "ack_timeout": 0, 00:21:08.339 "data_wr_pool_size": 0 00:21:08.339 } 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "method": "nvmf_create_subsystem", 00:21:08.339 "params": { 00:21:08.339 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.339 "allow_any_host": false, 00:21:08.339 "serial_number": "SPDK00000000000001", 00:21:08.339 "model_number": "SPDK bdev Controller", 00:21:08.339 "max_namespaces": 10, 00:21:08.339 "min_cntlid": 1, 00:21:08.339 "max_cntlid": 65519, 00:21:08.339 "ana_reporting": false 00:21:08.339 } 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "method": "nvmf_subsystem_add_host", 00:21:08.339 "params": { 00:21:08.339 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.339 "host": "nqn.2016-06.io.spdk:host1", 00:21:08.339 "psk": "/tmp/tmp.PAxKqfZ8FO" 00:21:08.339 } 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "method": "nvmf_subsystem_add_ns", 00:21:08.339 "params": { 00:21:08.339 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.339 "namespace": { 00:21:08.339 "nsid": 1, 00:21:08.339 "bdev_name": "malloc0", 00:21:08.339 "nguid": "E75C6B57D6F24698B0D6C9DC03A7FD39", 00:21:08.339 "uuid": "e75c6b57-d6f2-4698-b0d6-c9dc03a7fd39", 00:21:08.339 "no_auto_visible": false 00:21:08.339 } 00:21:08.339 } 00:21:08.339 }, 00:21:08.339 { 00:21:08.339 "method": "nvmf_subsystem_add_listener", 00:21:08.339 "params": { 00:21:08.339 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.339 "listen_address": { 00:21:08.339 "trtype": "TCP", 00:21:08.339 "adrfam": "IPv4", 00:21:08.339 "traddr": "10.0.0.2", 00:21:08.339 "trsvcid": "4420" 00:21:08.339 }, 00:21:08.339 "secure_channel": true 00:21:08.339 } 00:21:08.339 } 00:21:08.339 ] 00:21:08.339 } 00:21:08.339 ] 00:21:08.339 }' 00:21:08.339 13:06:30 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:08.599 13:06:30 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:08.599 "subsystems": [ 00:21:08.599 { 00:21:08.599 "subsystem": "keyring", 00:21:08.599 "config": [] 00:21:08.599 }, 00:21:08.599 { 00:21:08.599 "subsystem": "iobuf", 00:21:08.599 "config": [ 00:21:08.599 { 00:21:08.599 "method": "iobuf_set_options", 00:21:08.599 "params": { 00:21:08.599 "small_pool_count": 8192, 00:21:08.599 "large_pool_count": 1024, 00:21:08.599 "small_bufsize": 8192, 00:21:08.599 "large_bufsize": 135168 00:21:08.599 } 00:21:08.599 } 00:21:08.599 ] 00:21:08.599 }, 00:21:08.599 { 00:21:08.599 "subsystem": "sock", 00:21:08.599 "config": [ 00:21:08.599 { 00:21:08.599 "method": "sock_set_default_impl", 00:21:08.599 "params": { 00:21:08.599 "impl_name": "posix" 00:21:08.599 } 00:21:08.599 }, 00:21:08.599 { 00:21:08.599 "method": "sock_impl_set_options", 00:21:08.599 "params": { 00:21:08.599 "impl_name": "ssl", 00:21:08.599 "recv_buf_size": 4096, 00:21:08.599 "send_buf_size": 4096, 00:21:08.599 "enable_recv_pipe": true, 00:21:08.599 "enable_quickack": false, 00:21:08.599 "enable_placement_id": 0, 00:21:08.599 "enable_zerocopy_send_server": true, 00:21:08.599 "enable_zerocopy_send_client": false, 00:21:08.599 "zerocopy_threshold": 0, 00:21:08.599 "tls_version": 0, 00:21:08.599 "enable_ktls": false 00:21:08.599 } 00:21:08.599 }, 00:21:08.599 { 00:21:08.599 "method": "sock_impl_set_options", 00:21:08.599 "params": { 00:21:08.599 "impl_name": "posix", 00:21:08.599 "recv_buf_size": 2097152, 00:21:08.599 "send_buf_size": 2097152, 00:21:08.599 "enable_recv_pipe": true, 00:21:08.599 "enable_quickack": false, 00:21:08.599 "enable_placement_id": 0, 00:21:08.599 "enable_zerocopy_send_server": true, 00:21:08.599 "enable_zerocopy_send_client": false, 00:21:08.599 "zerocopy_threshold": 0, 00:21:08.599 "tls_version": 0, 00:21:08.599 "enable_ktls": false 00:21:08.599 } 00:21:08.599 } 00:21:08.599 ] 00:21:08.599 }, 00:21:08.599 { 00:21:08.599 "subsystem": "vmd", 00:21:08.599 "config": [] 00:21:08.599 }, 00:21:08.599 { 00:21:08.599 "subsystem": "accel", 00:21:08.600 "config": [ 00:21:08.600 { 00:21:08.600 "method": "accel_set_options", 00:21:08.600 "params": { 00:21:08.600 "small_cache_size": 128, 00:21:08.600 "large_cache_size": 16, 00:21:08.600 "task_count": 2048, 00:21:08.600 "sequence_count": 2048, 00:21:08.600 "buf_count": 2048 00:21:08.600 } 00:21:08.600 } 00:21:08.600 ] 00:21:08.600 }, 00:21:08.600 { 00:21:08.600 "subsystem": "bdev", 00:21:08.600 "config": [ 00:21:08.600 { 00:21:08.600 "method": "bdev_set_options", 00:21:08.600 "params": { 00:21:08.600 "bdev_io_pool_size": 65535, 00:21:08.600 "bdev_io_cache_size": 256, 00:21:08.600 "bdev_auto_examine": true, 00:21:08.600 "iobuf_small_cache_size": 128, 00:21:08.600 "iobuf_large_cache_size": 16 00:21:08.600 } 00:21:08.600 }, 00:21:08.600 { 00:21:08.600 "method": "bdev_raid_set_options", 00:21:08.600 "params": { 00:21:08.600 "process_window_size_kb": 1024 00:21:08.600 } 00:21:08.600 }, 00:21:08.600 { 00:21:08.600 "method": "bdev_iscsi_set_options", 00:21:08.600 "params": { 00:21:08.600 "timeout_sec": 30 00:21:08.600 } 00:21:08.600 }, 00:21:08.600 { 00:21:08.600 "method": "bdev_nvme_set_options", 00:21:08.600 "params": { 00:21:08.600 "action_on_timeout": "none", 00:21:08.600 "timeout_us": 0, 00:21:08.600 "timeout_admin_us": 0, 00:21:08.600 "keep_alive_timeout_ms": 10000, 00:21:08.600 "arbitration_burst": 0, 
00:21:08.600 "low_priority_weight": 0, 00:21:08.600 "medium_priority_weight": 0, 00:21:08.600 "high_priority_weight": 0, 00:21:08.600 "nvme_adminq_poll_period_us": 10000, 00:21:08.600 "nvme_ioq_poll_period_us": 0, 00:21:08.600 "io_queue_requests": 512, 00:21:08.600 "delay_cmd_submit": true, 00:21:08.600 "transport_retry_count": 4, 00:21:08.600 "bdev_retry_count": 3, 00:21:08.600 "transport_ack_timeout": 0, 00:21:08.600 "ctrlr_loss_timeout_sec": 0, 00:21:08.600 "reconnect_delay_sec": 0, 00:21:08.600 "fast_io_fail_timeout_sec": 0, 00:21:08.600 "disable_auto_failback": false, 00:21:08.600 "generate_uuids": false, 00:21:08.600 "transport_tos": 0, 00:21:08.600 "nvme_error_stat": false, 00:21:08.600 "rdma_srq_size": 0, 00:21:08.600 "io_path_stat": false, 00:21:08.600 "allow_accel_sequence": false, 00:21:08.600 "rdma_max_cq_size": 0, 00:21:08.600 "rdma_cm_event_timeout_ms": 0, 00:21:08.600 "dhchap_digests": [ 00:21:08.600 "sha256", 00:21:08.600 "sha384", 00:21:08.600 "sha512" 00:21:08.600 ], 00:21:08.600 "dhchap_dhgroups": [ 00:21:08.600 "null", 00:21:08.600 "ffdhe2048", 00:21:08.600 "ffdhe3072", 00:21:08.600 "ffdhe4096", 00:21:08.600 "ffdhe6144", 00:21:08.600 "ffdhe8192" 00:21:08.600 ] 00:21:08.600 } 00:21:08.600 }, 00:21:08.600 { 00:21:08.600 "method": "bdev_nvme_attach_controller", 00:21:08.600 "params": { 00:21:08.600 "name": "TLSTEST", 00:21:08.600 "trtype": "TCP", 00:21:08.600 "adrfam": "IPv4", 00:21:08.600 "traddr": "10.0.0.2", 00:21:08.600 "trsvcid": "4420", 00:21:08.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.600 "prchk_reftag": false, 00:21:08.600 "prchk_guard": false, 00:21:08.600 "ctrlr_loss_timeout_sec": 0, 00:21:08.600 "reconnect_delay_sec": 0, 00:21:08.600 "fast_io_fail_timeout_sec": 0, 00:21:08.600 "psk": "/tmp/tmp.PAxKqfZ8FO", 00:21:08.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.600 "hdgst": false, 00:21:08.600 "ddgst": false 00:21:08.600 } 00:21:08.600 }, 00:21:08.600 { 00:21:08.600 "method": "bdev_nvme_set_hotplug", 00:21:08.600 "params": { 00:21:08.600 "period_us": 100000, 00:21:08.600 "enable": false 00:21:08.600 } 00:21:08.600 }, 00:21:08.600 { 00:21:08.600 "method": "bdev_wait_for_examine" 00:21:08.600 } 00:21:08.600 ] 00:21:08.600 }, 00:21:08.600 { 00:21:08.600 "subsystem": "nbd", 00:21:08.600 "config": [] 00:21:08.600 } 00:21:08.600 ] 00:21:08.600 }' 00:21:08.600 13:06:30 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 727986 00:21:08.600 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 727986 ']' 00:21:08.600 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 727986 00:21:08.600 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:08.600 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:08.600 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 727986 00:21:08.600 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:08.600 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:08.600 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 727986' 00:21:08.600 killing process with pid 727986 00:21:08.600 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 727986 00:21:08.600 Received shutdown signal, test time was about 10.000000 seconds 00:21:08.600 00:21:08.600 Latency(us) 00:21:08.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:08.600 =================================================================================================================== 00:21:08.600 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:08.600 [2024-07-15 13:06:30.334620] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:08.600 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 727986 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 727630 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 727630 ']' 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 727630 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 727630 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 727630' 00:21:08.861 killing process with pid 727630 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 727630 00:21:08.861 [2024-07-15 13:06:30.504846] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 727630 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.861 13:06:30 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:08.861 "subsystems": [ 00:21:08.861 { 00:21:08.861 "subsystem": "keyring", 00:21:08.861 "config": [] 00:21:08.861 }, 00:21:08.861 { 00:21:08.861 "subsystem": "iobuf", 00:21:08.861 "config": [ 00:21:08.861 { 00:21:08.861 "method": "iobuf_set_options", 00:21:08.861 "params": { 00:21:08.861 "small_pool_count": 8192, 00:21:08.861 "large_pool_count": 1024, 00:21:08.861 "small_bufsize": 8192, 00:21:08.861 "large_bufsize": 135168 00:21:08.861 } 00:21:08.861 } 00:21:08.861 ] 00:21:08.861 }, 00:21:08.861 { 00:21:08.861 "subsystem": "sock", 00:21:08.861 "config": [ 00:21:08.861 { 00:21:08.861 "method": "sock_set_default_impl", 00:21:08.861 "params": { 00:21:08.861 "impl_name": "posix" 00:21:08.861 } 00:21:08.861 }, 00:21:08.861 { 00:21:08.861 "method": "sock_impl_set_options", 00:21:08.861 "params": { 00:21:08.861 "impl_name": "ssl", 00:21:08.861 "recv_buf_size": 4096, 00:21:08.861 "send_buf_size": 4096, 00:21:08.861 "enable_recv_pipe": true, 00:21:08.862 "enable_quickack": false, 00:21:08.862 "enable_placement_id": 0, 00:21:08.862 "enable_zerocopy_send_server": true, 00:21:08.862 "enable_zerocopy_send_client": false, 00:21:08.862 "zerocopy_threshold": 0, 00:21:08.862 "tls_version": 0, 00:21:08.862 "enable_ktls": false 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "sock_impl_set_options", 00:21:08.862 
"params": { 00:21:08.862 "impl_name": "posix", 00:21:08.862 "recv_buf_size": 2097152, 00:21:08.862 "send_buf_size": 2097152, 00:21:08.862 "enable_recv_pipe": true, 00:21:08.862 "enable_quickack": false, 00:21:08.862 "enable_placement_id": 0, 00:21:08.862 "enable_zerocopy_send_server": true, 00:21:08.862 "enable_zerocopy_send_client": false, 00:21:08.862 "zerocopy_threshold": 0, 00:21:08.862 "tls_version": 0, 00:21:08.862 "enable_ktls": false 00:21:08.862 } 00:21:08.862 } 00:21:08.862 ] 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "subsystem": "vmd", 00:21:08.862 "config": [] 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "subsystem": "accel", 00:21:08.862 "config": [ 00:21:08.862 { 00:21:08.862 "method": "accel_set_options", 00:21:08.862 "params": { 00:21:08.862 "small_cache_size": 128, 00:21:08.862 "large_cache_size": 16, 00:21:08.862 "task_count": 2048, 00:21:08.862 "sequence_count": 2048, 00:21:08.862 "buf_count": 2048 00:21:08.862 } 00:21:08.862 } 00:21:08.862 ] 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "subsystem": "bdev", 00:21:08.862 "config": [ 00:21:08.862 { 00:21:08.862 "method": "bdev_set_options", 00:21:08.862 "params": { 00:21:08.862 "bdev_io_pool_size": 65535, 00:21:08.862 "bdev_io_cache_size": 256, 00:21:08.862 "bdev_auto_examine": true, 00:21:08.862 "iobuf_small_cache_size": 128, 00:21:08.862 "iobuf_large_cache_size": 16 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "bdev_raid_set_options", 00:21:08.862 "params": { 00:21:08.862 "process_window_size_kb": 1024 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "bdev_iscsi_set_options", 00:21:08.862 "params": { 00:21:08.862 "timeout_sec": 30 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "bdev_nvme_set_options", 00:21:08.862 "params": { 00:21:08.862 "action_on_timeout": "none", 00:21:08.862 "timeout_us": 0, 00:21:08.862 "timeout_admin_us": 0, 00:21:08.862 "keep_alive_timeout_ms": 10000, 00:21:08.862 "arbitration_burst": 0, 00:21:08.862 "low_priority_weight": 0, 00:21:08.862 "medium_priority_weight": 0, 00:21:08.862 "high_priority_weight": 0, 00:21:08.862 "nvme_adminq_poll_period_us": 10000, 00:21:08.862 "nvme_ioq_poll_period_us": 0, 00:21:08.862 "io_queue_requests": 0, 00:21:08.862 "delay_cmd_submit": true, 00:21:08.862 "transport_retry_count": 4, 00:21:08.862 "bdev_retry_count": 3, 00:21:08.862 "transport_ack_timeout": 0, 00:21:08.862 "ctrlr_loss_timeout_sec": 0, 00:21:08.862 "reconnect_delay_sec": 0, 00:21:08.862 "fast_io_fail_timeout_sec": 0, 00:21:08.862 "disable_auto_failback": false, 00:21:08.862 "generate_uuids": false, 00:21:08.862 "transport_tos": 0, 00:21:08.862 "nvme_error_stat": false, 00:21:08.862 "rdma_srq_size": 0, 00:21:08.862 "io_path_stat": false, 00:21:08.862 "allow_accel_sequence": false, 00:21:08.862 "rdma_max_cq_size": 0, 00:21:08.862 "rdma_cm_event_timeout_ms": 0, 00:21:08.862 "dhchap_digests": [ 00:21:08.862 "sha256", 00:21:08.862 "sha384", 00:21:08.862 "sha512" 00:21:08.862 ], 00:21:08.862 "dhchap_dhgroups": [ 00:21:08.862 "null", 00:21:08.862 "ffdhe2048", 00:21:08.862 "ffdhe3072", 00:21:08.862 "ffdhe4096", 00:21:08.862 "ffdhe6144", 00:21:08.862 "ffdhe8192" 00:21:08.862 ] 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "bdev_nvme_set_hotplug", 00:21:08.862 "params": { 00:21:08.862 "period_us": 100000, 00:21:08.862 "enable": false 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "bdev_malloc_create", 00:21:08.862 "params": { 00:21:08.862 "name": "malloc0", 00:21:08.862 "num_blocks": 8192, 00:21:08.862 
"block_size": 4096, 00:21:08.862 "physical_block_size": 4096, 00:21:08.862 "uuid": "e75c6b57-d6f2-4698-b0d6-c9dc03a7fd39", 00:21:08.862 "optimal_io_boundary": 0 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "bdev_wait_for_examine" 00:21:08.862 } 00:21:08.862 ] 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "subsystem": "nbd", 00:21:08.862 "config": [] 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "subsystem": "scheduler", 00:21:08.862 "config": [ 00:21:08.862 { 00:21:08.862 "method": "framework_set_scheduler", 00:21:08.862 "params": { 00:21:08.862 "name": "static" 00:21:08.862 } 00:21:08.862 } 00:21:08.862 ] 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "subsystem": "nvmf", 00:21:08.862 "config": [ 00:21:08.862 { 00:21:08.862 "method": "nvmf_set_config", 00:21:08.862 "params": { 00:21:08.862 "discovery_filter": "match_any", 00:21:08.862 "admin_cmd_passthru": { 00:21:08.862 "identify_ctrlr": false 00:21:08.862 } 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "nvmf_set_max_subsystems", 00:21:08.862 "params": { 00:21:08.862 "max_subsystems": 1024 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "nvmf_set_crdt", 00:21:08.862 "params": { 00:21:08.862 "crdt1": 0, 00:21:08.862 "crdt2": 0, 00:21:08.862 "crdt3": 0 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "nvmf_create_transport", 00:21:08.862 "params": { 00:21:08.862 "trtype": "TCP", 00:21:08.862 "max_queue_depth": 128, 00:21:08.862 "max_io_qpairs_per_ctrlr": 127, 00:21:08.862 "in_capsule_data_size": 4096, 00:21:08.862 "max_io_size": 131072, 00:21:08.862 "io_unit_size": 131072, 00:21:08.862 "max_aq_depth": 128, 00:21:08.862 "num_shared_buffers": 511, 00:21:08.862 "buf_cache_size": 4294967295, 00:21:08.862 "dif_insert_or_strip": false, 00:21:08.862 "zcopy": false, 00:21:08.862 "c2h_success": false, 00:21:08.862 "sock_priority": 0, 00:21:08.862 "abort_timeout_sec": 1, 00:21:08.862 "ack_timeout": 0, 00:21:08.862 "data_wr_pool_size": 0 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "nvmf_create_subsystem", 00:21:08.862 "params": { 00:21:08.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.862 "allow_any_host": false, 00:21:08.862 "serial_number": "SPDK00000000000001", 00:21:08.862 "model_number": "SPDK bdev Controller", 00:21:08.862 "max_namespaces": 10, 00:21:08.862 "min_cntlid": 1, 00:21:08.862 "max_cntlid": 65519, 00:21:08.862 "ana_reporting": false 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "nvmf_subsystem_add_host", 00:21:08.862 "params": { 00:21:08.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.862 "host": "nqn.2016-06.io.spdk:host1", 00:21:08.862 "psk": "/tmp/tmp.PAxKqfZ8FO" 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "nvmf_subsystem_add_ns", 00:21:08.862 "params": { 00:21:08.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.862 "namespace": { 00:21:08.862 "nsid": 1, 00:21:08.862 "bdev_name": "malloc0", 00:21:08.862 "nguid": "E75C6B57D6F24698B0D6C9DC03A7FD39", 00:21:08.862 "uuid": "e75c6b57-d6f2-4698-b0d6-c9dc03a7fd39", 00:21:08.862 "no_auto_visible": false 00:21:08.862 } 00:21:08.862 } 00:21:08.862 }, 00:21:08.862 { 00:21:08.862 "method": "nvmf_subsystem_add_listener", 00:21:08.862 "params": { 00:21:08.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.862 "listen_address": { 00:21:08.862 "trtype": "TCP", 00:21:08.862 "adrfam": "IPv4", 00:21:08.862 "traddr": "10.0.0.2", 00:21:08.862 "trsvcid": "4420" 00:21:08.862 }, 00:21:08.862 "secure_channel": true 00:21:08.862 } 00:21:08.862 } 
00:21:08.862 ] 00:21:08.862 } 00:21:08.862 ] 00:21:08.862 }' 00:21:08.862 13:06:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=728428 00:21:08.862 13:06:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 728428 00:21:08.863 13:06:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:08.863 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 728428 ']' 00:21:08.863 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.863 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.863 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.863 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.863 13:06:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.122 [2024-07-15 13:06:30.687400] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:21:09.122 [2024-07-15 13:06:30.687458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.122 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.122 [2024-07-15 13:06:30.775654] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.122 [2024-07-15 13:06:30.830347] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.122 [2024-07-15 13:06:30.830381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.122 [2024-07-15 13:06:30.830387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.122 [2024-07-15 13:06:30.830394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.122 [2024-07-15 13:06:30.830398] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
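The target that just started (pid 728428) is not configured over RPC at all; it replays the JSON captured earlier with save_config, handed in on a file descriptor via -c /dev/fd/62. A condensed sketch of that pattern, assuming an SPDK checkout and omitting the ip netns / EAL flags used in this run; process substitution is presumably how the echoed JSON ends up on /dev/fd/62:

    tgtconf=$(scripts/rpc.py save_config)                  # dump the live target configuration as JSON
    build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &      # restart the target directly from that JSON
    bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &   # same pattern for bdevperf, seen below as -c /dev/fd/63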
00:21:09.122 [2024-07-15 13:06:30.830444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.382 [2024-07-15 13:06:31.014195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.383 [2024-07-15 13:06:31.030169] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:09.383 [2024-07-15 13:06:31.046215] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:09.383 [2024-07-15 13:06:31.060395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.644 13:06:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.644 13:06:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:09.644 13:06:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:09.644 13:06:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:09.644 13:06:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.906 13:06:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.906 13:06:31 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=728650 00:21:09.906 13:06:31 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 728650 /var/tmp/bdevperf.sock 00:21:09.906 13:06:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 728650 ']' 00:21:09.906 13:06:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.906 13:06:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.906 13:06:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
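The "Waiting for process to start up and listen on UNIX domain socket ..." lines come from the harness's waitforlisten helper. Its real implementation is not reproduced in this trace; one way to get the same effect (an illustrative loop only, not the autotest_common.sh code) is to poll the application's RPC socket with rpc.py until it answers:

# Illustrative only: poll the RPC socket until rpc_get_methods succeeds or the
# retry budget runs out. rpc.py and the socket path are the ones in this log.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

for _ in $(seq 1 100); do
    if "$RPC" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.2
done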
00:21:09.906 13:06:31 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:09.906 13:06:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.906 13:06:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.906 13:06:31 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:09.906 "subsystems": [ 00:21:09.906 { 00:21:09.906 "subsystem": "keyring", 00:21:09.906 "config": [] 00:21:09.906 }, 00:21:09.906 { 00:21:09.906 "subsystem": "iobuf", 00:21:09.906 "config": [ 00:21:09.906 { 00:21:09.906 "method": "iobuf_set_options", 00:21:09.906 "params": { 00:21:09.906 "small_pool_count": 8192, 00:21:09.906 "large_pool_count": 1024, 00:21:09.906 "small_bufsize": 8192, 00:21:09.906 "large_bufsize": 135168 00:21:09.906 } 00:21:09.906 } 00:21:09.906 ] 00:21:09.906 }, 00:21:09.906 { 00:21:09.906 "subsystem": "sock", 00:21:09.906 "config": [ 00:21:09.906 { 00:21:09.906 "method": "sock_set_default_impl", 00:21:09.906 "params": { 00:21:09.906 "impl_name": "posix" 00:21:09.906 } 00:21:09.906 }, 00:21:09.906 { 00:21:09.906 "method": "sock_impl_set_options", 00:21:09.906 "params": { 00:21:09.906 "impl_name": "ssl", 00:21:09.906 "recv_buf_size": 4096, 00:21:09.906 "send_buf_size": 4096, 00:21:09.906 "enable_recv_pipe": true, 00:21:09.906 "enable_quickack": false, 00:21:09.906 "enable_placement_id": 0, 00:21:09.906 "enable_zerocopy_send_server": true, 00:21:09.906 "enable_zerocopy_send_client": false, 00:21:09.906 "zerocopy_threshold": 0, 00:21:09.906 "tls_version": 0, 00:21:09.906 "enable_ktls": false 00:21:09.906 } 00:21:09.906 }, 00:21:09.906 { 00:21:09.906 "method": "sock_impl_set_options", 00:21:09.906 "params": { 00:21:09.906 "impl_name": "posix", 00:21:09.906 "recv_buf_size": 2097152, 00:21:09.906 "send_buf_size": 2097152, 00:21:09.906 "enable_recv_pipe": true, 00:21:09.906 "enable_quickack": false, 00:21:09.906 "enable_placement_id": 0, 00:21:09.906 "enable_zerocopy_send_server": true, 00:21:09.906 "enable_zerocopy_send_client": false, 00:21:09.906 "zerocopy_threshold": 0, 00:21:09.906 "tls_version": 0, 00:21:09.906 "enable_ktls": false 00:21:09.906 } 00:21:09.906 } 00:21:09.906 ] 00:21:09.906 }, 00:21:09.906 { 00:21:09.906 "subsystem": "vmd", 00:21:09.906 "config": [] 00:21:09.906 }, 00:21:09.906 { 00:21:09.906 "subsystem": "accel", 00:21:09.906 "config": [ 00:21:09.906 { 00:21:09.906 "method": "accel_set_options", 00:21:09.906 "params": { 00:21:09.906 "small_cache_size": 128, 00:21:09.906 "large_cache_size": 16, 00:21:09.906 "task_count": 2048, 00:21:09.906 "sequence_count": 2048, 00:21:09.906 "buf_count": 2048 00:21:09.906 } 00:21:09.906 } 00:21:09.906 ] 00:21:09.906 }, 00:21:09.906 { 00:21:09.906 "subsystem": "bdev", 00:21:09.906 "config": [ 00:21:09.906 { 00:21:09.906 "method": "bdev_set_options", 00:21:09.906 "params": { 00:21:09.906 "bdev_io_pool_size": 65535, 00:21:09.906 "bdev_io_cache_size": 256, 00:21:09.906 "bdev_auto_examine": true, 00:21:09.906 "iobuf_small_cache_size": 128, 00:21:09.906 "iobuf_large_cache_size": 16 00:21:09.906 } 00:21:09.906 }, 00:21:09.906 { 00:21:09.906 "method": "bdev_raid_set_options", 00:21:09.906 "params": { 00:21:09.906 "process_window_size_kb": 1024 00:21:09.906 } 00:21:09.906 }, 00:21:09.906 { 00:21:09.906 "method": "bdev_iscsi_set_options", 00:21:09.906 "params": { 00:21:09.906 "timeout_sec": 30 00:21:09.906 } 00:21:09.906 }, 00:21:09.906 { 00:21:09.906 "method": 
"bdev_nvme_set_options", 00:21:09.906 "params": { 00:21:09.906 "action_on_timeout": "none", 00:21:09.906 "timeout_us": 0, 00:21:09.906 "timeout_admin_us": 0, 00:21:09.906 "keep_alive_timeout_ms": 10000, 00:21:09.906 "arbitration_burst": 0, 00:21:09.906 "low_priority_weight": 0, 00:21:09.906 "medium_priority_weight": 0, 00:21:09.906 "high_priority_weight": 0, 00:21:09.906 "nvme_adminq_poll_period_us": 10000, 00:21:09.906 "nvme_ioq_poll_period_us": 0, 00:21:09.906 "io_queue_requests": 512, 00:21:09.906 "delay_cmd_submit": true, 00:21:09.906 "transport_retry_count": 4, 00:21:09.906 "bdev_retry_count": 3, 00:21:09.906 "transport_ack_timeout": 0, 00:21:09.906 "ctrlr_loss_timeout_sec": 0, 00:21:09.906 "reconnect_delay_sec": 0, 00:21:09.906 "fast_io_fail_timeout_sec": 0, 00:21:09.906 "disable_auto_failback": false, 00:21:09.906 "generate_uuids": false, 00:21:09.906 "transport_tos": 0, 00:21:09.906 "nvme_error_stat": false, 00:21:09.906 "rdma_srq_size": 0, 00:21:09.906 "io_path_stat": false, 00:21:09.906 "allow_accel_sequence": false, 00:21:09.906 "rdma_max_cq_size": 0, 00:21:09.906 "rdma_cm_event_timeout_ms": 0, 00:21:09.906 "dhchap_digests": [ 00:21:09.906 "sha256", 00:21:09.906 "sha384", 00:21:09.906 "sha512" 00:21:09.906 ], 00:21:09.906 "dhchap_dhgroups": [ 00:21:09.906 "null", 00:21:09.906 "ffdhe2048", 00:21:09.906 "ffdhe3072", 00:21:09.906 "ffdhe4096", 00:21:09.906 "ffdhe6144", 00:21:09.906 "ffdhe8192" 00:21:09.906 ] 00:21:09.906 } 00:21:09.906 }, 00:21:09.906 { 00:21:09.906 "method": "bdev_nvme_attach_controller", 00:21:09.906 "params": { 00:21:09.906 "name": "TLSTEST", 00:21:09.906 "trtype": "TCP", 00:21:09.906 "adrfam": "IPv4", 00:21:09.906 "traddr": "10.0.0.2", 00:21:09.907 "trsvcid": "4420", 00:21:09.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:09.907 "prchk_reftag": false, 00:21:09.907 "prchk_guard": false, 00:21:09.907 "ctrlr_loss_timeout_sec": 0, 00:21:09.907 "reconnect_delay_sec": 0, 00:21:09.907 "fast_io_fail_timeout_sec": 0, 00:21:09.907 "psk": "/tmp/tmp.PAxKqfZ8FO", 00:21:09.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:09.907 "hdgst": false, 00:21:09.907 "ddgst": false 00:21:09.907 } 00:21:09.907 }, 00:21:09.907 { 00:21:09.907 "method": "bdev_nvme_set_hotplug", 00:21:09.907 "params": { 00:21:09.907 "period_us": 100000, 00:21:09.907 "enable": false 00:21:09.907 } 00:21:09.907 }, 00:21:09.907 { 00:21:09.907 "method": "bdev_wait_for_examine" 00:21:09.907 } 00:21:09.907 ] 00:21:09.907 }, 00:21:09.907 { 00:21:09.907 "subsystem": "nbd", 00:21:09.907 "config": [] 00:21:09.907 } 00:21:09.907 ] 00:21:09.907 }' 00:21:09.907 [2024-07-15 13:06:31.532371] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:21:09.907 [2024-07-15 13:06:31.532425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid728650 ] 00:21:09.907 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.907 [2024-07-15 13:06:31.588780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.907 [2024-07-15 13:06:31.640903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.168 [2024-07-15 13:06:31.765827] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:10.168 [2024-07-15 13:06:31.765891] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:10.739 13:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:10.739 13:06:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:10.739 13:06:32 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:10.739 Running I/O for 10 seconds... 00:21:20.745 00:21:20.745 Latency(us) 00:21:20.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.745 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:20.746 Verification LBA range: start 0x0 length 0x2000 00:21:20.746 TLSTESTn1 : 10.02 5214.68 20.37 0.00 0.00 24512.22 5789.01 74711.04 00:21:20.746 =================================================================================================================== 00:21:20.746 Total : 5214.68 20.37 0.00 0.00 24512.22 5789.01 74711.04 00:21:20.746 0 00:21:20.746 13:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:20.746 13:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 728650 00:21:20.746 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 728650 ']' 00:21:20.746 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 728650 00:21:20.746 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.746 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.746 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 728650 00:21:20.746 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:20.746 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:20.746 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 728650' 00:21:20.746 killing process with pid 728650 00:21:20.746 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 728650 00:21:20.746 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.746 00:21:20.746 Latency(us) 00:21:20.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.746 =================================================================================================================== 00:21:20.746 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.746 [2024-07-15 13:06:42.511785] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 
scheduled for removal in v24.09 hit 1 times 00:21:20.746 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 728650 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 728428 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 728428 ']' 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 728428 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 728428 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 728428' 00:21:21.007 killing process with pid 728428 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 728428 00:21:21.007 [2024-07-15 13:06:42.679646] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 728428 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=730795 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 730795 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 730795 ']' 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.007 13:06:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.269 [2024-07-15 13:06:42.855328] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:21:21.269 [2024-07-15 13:06:42.855382] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.269 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.269 [2024-07-15 13:06:42.928538] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.269 [2024-07-15 13:06:42.991447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.269 [2024-07-15 13:06:42.991485] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.269 [2024-07-15 13:06:42.991493] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.269 [2024-07-15 13:06:42.991500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.269 [2024-07-15 13:06:42.991506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:21.269 [2024-07-15 13:06:42.991529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.839 13:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.839 13:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:21.839 13:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.839 13:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.839 13:06:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.100 13:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.100 13:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.PAxKqfZ8FO 00:21:22.100 13:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PAxKqfZ8FO 00:21:22.100 13:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:22.100 [2024-07-15 13:06:43.810579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.100 13:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:22.360 13:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:22.360 [2024-07-15 13:06:44.135386] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.360 [2024-07-15 13:06:44.135567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.360 13:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:22.621 malloc0 00:21:22.621 13:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:22.882 13:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.PAxKqfZ8FO 00:21:22.882 [2024-07-15 13:06:44.635333] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:22.882 13:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:22.882 13:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=731218 00:21:22.882 13:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:22.882 13:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 731218 /var/tmp/bdevperf.sock 00:21:22.882 13:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 731218 ']' 00:21:22.882 13:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.882 13:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.882 13:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.882 13:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.883 13:06:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.883 [2024-07-15 13:06:44.701790] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:21:22.883 [2024-07-15 13:06:44.701842] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid731218 ] 00:21:23.143 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.144 [2024-07-15 13:06:44.783270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.144 [2024-07-15 13:06:44.837120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.715 13:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.715 13:06:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:23.715 13:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PAxKqfZ8FO 00:21:23.975 13:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:23.975 [2024-07-15 13:06:45.755738] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.236 nvme0n1 00:21:24.236 13:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:24.236 Running I/O for 1 seconds... 
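This second pass provisions the target over RPC instead of a boot-time config, then drives bdevperf's keyring from its own socket. The calls below are the ones traced above, collected in order; the PSK file /tmp/tmp.PAxKqfZ8FO was generated earlier in the test:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BDEVPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
KEY=/tmp/tmp.PAxKqfZ8FO

# Target side (default socket /var/tmp/spdk.sock): TCP transport, subsystem,
# TLS-enabled listener (-k), malloc-backed namespace, and a host bound to the PSK file.
"$RPC" nvmf_create_transport -t tcp -o
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
"$RPC" bdev_malloc_create 32 4096 -b malloc0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

# Initiator side (bdevperf's socket): register the PSK as keyring entry "key0",
# attach the controller with it, then start the verify run.
"$RPC" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY"
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
"$BDEVPERF_PY" -s /var/tmp/bdevperf.sock perform_tests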
00:21:25.180 00:21:25.180 Latency(us) 00:21:25.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.180 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:25.180 Verification LBA range: start 0x0 length 0x2000 00:21:25.180 nvme0n1 : 1.02 3215.88 12.56 0.00 0.00 39481.80 5734.40 51336.53 00:21:25.180 =================================================================================================================== 00:21:25.180 Total : 3215.88 12.56 0.00 0.00 39481.80 5734.40 51336.53 00:21:25.180 0 00:21:25.180 13:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 731218 00:21:25.180 13:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 731218 ']' 00:21:25.180 13:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 731218 00:21:25.180 13:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:25.180 13:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.180 13:06:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 731218 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 731218' 00:21:25.442 killing process with pid 731218 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 731218 00:21:25.442 Received shutdown signal, test time was about 1.000000 seconds 00:21:25.442 00:21:25.442 Latency(us) 00:21:25.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.442 =================================================================================================================== 00:21:25.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 731218 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 730795 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 730795 ']' 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 730795 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 730795 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 730795' 00:21:25.442 killing process with pid 730795 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 730795 00:21:25.442 [2024-07-15 13:06:47.176778] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:25.442 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 730795 00:21:25.703 13:06:47 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:25.703 13:06:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:25.703 13:06:47 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:25.703 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.703 13:06:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=731713 00:21:25.703 13:06:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 731713 00:21:25.703 13:06:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:25.703 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 731713 ']' 00:21:25.703 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.703 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.703 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.703 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.703 13:06:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.703 [2024-07-15 13:06:47.384599] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:21:25.703 [2024-07-15 13:06:47.384654] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.703 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.703 [2024-07-15 13:06:47.458258] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.703 [2024-07-15 13:06:47.520715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.703 [2024-07-15 13:06:47.520756] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.703 [2024-07-15 13:06:47.520763] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.703 [2024-07-15 13:06:47.520770] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.703 [2024-07-15 13:06:47.520775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
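The killprocess traces just above (pids 731218 and 730795) run a fixed guard before anything is actually signalled. A simplified reduction of that guard follows; the real helper in autotest_common.sh has additional branches, so treat this as a sketch rather than its source:

# Refuse to act on an empty or dead pid, never kill the sudo wrapper itself,
# then kill the reactor process and reap it.
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0, reactor_1
    [ "$process_name" = "sudo" ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}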
00:21:25.703 [2024-07-15 13:06:47.520795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.647 [2024-07-15 13:06:48.179365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.647 malloc0 00:21:26.647 [2024-07-15 13:06:48.206373] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:26.647 [2024-07-15 13:06:48.206567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=731975 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 731975 /var/tmp/bdevperf.sock 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 731975 ']' 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.647 13:06:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:26.647 [2024-07-15 13:06:48.283729] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
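Before each bdevperf phase the script re-arms an EXIT trap (visible above as trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' and trap 'cleanup; exit 1'), so both daemons are torn down even if an assertion aborts the run. A reduced form of that pattern, with a hypothetical cleanup body standing in for the harness's nvmftestfini/cleanup helpers:

# Hypothetical stand-in for the harness cleanup: kill whichever of the two
# daemons is still recorded in a pid variable, then clear the trap.
cleanup() {
    [ -n "${bdevperf_pid:-}" ] && kill "$bdevperf_pid" 2>/dev/null
    [ -n "${nvmfpid:-}" ] && kill "$nvmfpid" 2>/dev/null
    trap - SIGINT SIGTERM EXIT
}
trap 'cleanup; exit 1' SIGINT SIGTERM EXIT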
00:21:26.647 [2024-07-15 13:06:48.283779] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid731975 ] 00:21:26.647 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.647 [2024-07-15 13:06:48.365603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.647 [2024-07-15 13:06:48.420239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.636 13:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.636 13:06:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:27.636 13:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PAxKqfZ8FO 00:21:27.636 13:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:27.636 [2024-07-15 13:06:49.362918] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:27.636 nvme0n1 00:21:27.960 13:06:49 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:27.960 Running I/O for 1 seconds... 00:21:28.909 00:21:28.909 Latency(us) 00:21:28.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.909 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:28.909 Verification LBA range: start 0x0 length 0x2000 00:21:28.909 nvme0n1 : 1.02 3388.48 13.24 0.00 0.00 37446.31 5597.87 36481.71 00:21:28.909 =================================================================================================================== 00:21:28.909 Total : 3388.48 13.24 0.00 0.00 37446.31 5597.87 36481.71 00:21:28.909 0 00:21:28.909 13:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:28.909 13:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.909 13:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.909 13:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.909 13:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:28.909 "subsystems": [ 00:21:28.909 { 00:21:28.909 "subsystem": "keyring", 00:21:28.909 "config": [ 00:21:28.909 { 00:21:28.909 "method": "keyring_file_add_key", 00:21:28.909 "params": { 00:21:28.909 "name": "key0", 00:21:28.909 "path": "/tmp/tmp.PAxKqfZ8FO" 00:21:28.909 } 00:21:28.909 } 00:21:28.909 ] 00:21:28.909 }, 00:21:28.909 { 00:21:28.909 "subsystem": "iobuf", 00:21:28.909 "config": [ 00:21:28.909 { 00:21:28.909 "method": "iobuf_set_options", 00:21:28.909 "params": { 00:21:28.909 "small_pool_count": 8192, 00:21:28.909 "large_pool_count": 1024, 00:21:28.909 "small_bufsize": 8192, 00:21:28.909 "large_bufsize": 135168 00:21:28.909 } 00:21:28.909 } 00:21:28.909 ] 00:21:28.909 }, 00:21:28.909 { 00:21:28.909 "subsystem": "sock", 00:21:28.909 "config": [ 00:21:28.909 { 00:21:28.909 "method": "sock_set_default_impl", 00:21:28.909 "params": { 00:21:28.909 "impl_name": "posix" 00:21:28.909 } 
00:21:28.909 }, 00:21:28.909 { 00:21:28.909 "method": "sock_impl_set_options", 00:21:28.909 "params": { 00:21:28.909 "impl_name": "ssl", 00:21:28.909 "recv_buf_size": 4096, 00:21:28.909 "send_buf_size": 4096, 00:21:28.909 "enable_recv_pipe": true, 00:21:28.909 "enable_quickack": false, 00:21:28.909 "enable_placement_id": 0, 00:21:28.909 "enable_zerocopy_send_server": true, 00:21:28.909 "enable_zerocopy_send_client": false, 00:21:28.909 "zerocopy_threshold": 0, 00:21:28.909 "tls_version": 0, 00:21:28.909 "enable_ktls": false 00:21:28.909 } 00:21:28.909 }, 00:21:28.909 { 00:21:28.909 "method": "sock_impl_set_options", 00:21:28.909 "params": { 00:21:28.909 "impl_name": "posix", 00:21:28.909 "recv_buf_size": 2097152, 00:21:28.909 "send_buf_size": 2097152, 00:21:28.909 "enable_recv_pipe": true, 00:21:28.909 "enable_quickack": false, 00:21:28.909 "enable_placement_id": 0, 00:21:28.909 "enable_zerocopy_send_server": true, 00:21:28.909 "enable_zerocopy_send_client": false, 00:21:28.909 "zerocopy_threshold": 0, 00:21:28.909 "tls_version": 0, 00:21:28.909 "enable_ktls": false 00:21:28.909 } 00:21:28.909 } 00:21:28.909 ] 00:21:28.909 }, 00:21:28.909 { 00:21:28.909 "subsystem": "vmd", 00:21:28.909 "config": [] 00:21:28.909 }, 00:21:28.909 { 00:21:28.909 "subsystem": "accel", 00:21:28.909 "config": [ 00:21:28.909 { 00:21:28.909 "method": "accel_set_options", 00:21:28.909 "params": { 00:21:28.909 "small_cache_size": 128, 00:21:28.909 "large_cache_size": 16, 00:21:28.909 "task_count": 2048, 00:21:28.909 "sequence_count": 2048, 00:21:28.909 "buf_count": 2048 00:21:28.909 } 00:21:28.909 } 00:21:28.909 ] 00:21:28.909 }, 00:21:28.909 { 00:21:28.909 "subsystem": "bdev", 00:21:28.909 "config": [ 00:21:28.909 { 00:21:28.909 "method": "bdev_set_options", 00:21:28.909 "params": { 00:21:28.909 "bdev_io_pool_size": 65535, 00:21:28.909 "bdev_io_cache_size": 256, 00:21:28.909 "bdev_auto_examine": true, 00:21:28.909 "iobuf_small_cache_size": 128, 00:21:28.909 "iobuf_large_cache_size": 16 00:21:28.909 } 00:21:28.909 }, 00:21:28.909 { 00:21:28.909 "method": "bdev_raid_set_options", 00:21:28.909 "params": { 00:21:28.909 "process_window_size_kb": 1024 00:21:28.909 } 00:21:28.909 }, 00:21:28.909 { 00:21:28.909 "method": "bdev_iscsi_set_options", 00:21:28.909 "params": { 00:21:28.909 "timeout_sec": 30 00:21:28.909 } 00:21:28.909 }, 00:21:28.909 { 00:21:28.909 "method": "bdev_nvme_set_options", 00:21:28.909 "params": { 00:21:28.909 "action_on_timeout": "none", 00:21:28.909 "timeout_us": 0, 00:21:28.909 "timeout_admin_us": 0, 00:21:28.909 "keep_alive_timeout_ms": 10000, 00:21:28.909 "arbitration_burst": 0, 00:21:28.909 "low_priority_weight": 0, 00:21:28.909 "medium_priority_weight": 0, 00:21:28.909 "high_priority_weight": 0, 00:21:28.909 "nvme_adminq_poll_period_us": 10000, 00:21:28.909 "nvme_ioq_poll_period_us": 0, 00:21:28.909 "io_queue_requests": 0, 00:21:28.909 "delay_cmd_submit": true, 00:21:28.909 "transport_retry_count": 4, 00:21:28.909 "bdev_retry_count": 3, 00:21:28.909 "transport_ack_timeout": 0, 00:21:28.909 "ctrlr_loss_timeout_sec": 0, 00:21:28.909 "reconnect_delay_sec": 0, 00:21:28.909 "fast_io_fail_timeout_sec": 0, 00:21:28.909 "disable_auto_failback": false, 00:21:28.909 "generate_uuids": false, 00:21:28.909 "transport_tos": 0, 00:21:28.909 "nvme_error_stat": false, 00:21:28.909 "rdma_srq_size": 0, 00:21:28.909 "io_path_stat": false, 00:21:28.909 "allow_accel_sequence": false, 00:21:28.909 "rdma_max_cq_size": 0, 00:21:28.909 "rdma_cm_event_timeout_ms": 0, 00:21:28.909 "dhchap_digests": [ 00:21:28.909 "sha256", 
00:21:28.909 "sha384", 00:21:28.909 "sha512" 00:21:28.909 ], 00:21:28.909 "dhchap_dhgroups": [ 00:21:28.909 "null", 00:21:28.909 "ffdhe2048", 00:21:28.909 "ffdhe3072", 00:21:28.909 "ffdhe4096", 00:21:28.909 "ffdhe6144", 00:21:28.909 "ffdhe8192" 00:21:28.909 ] 00:21:28.909 } 00:21:28.909 }, 00:21:28.909 { 00:21:28.909 "method": "bdev_nvme_set_hotplug", 00:21:28.909 "params": { 00:21:28.909 "period_us": 100000, 00:21:28.909 "enable": false 00:21:28.909 } 00:21:28.909 }, 00:21:28.909 { 00:21:28.909 "method": "bdev_malloc_create", 00:21:28.909 "params": { 00:21:28.909 "name": "malloc0", 00:21:28.909 "num_blocks": 8192, 00:21:28.909 "block_size": 4096, 00:21:28.910 "physical_block_size": 4096, 00:21:28.910 "uuid": "8be6809c-c542-4d9a-8319-08350cfc7cd5", 00:21:28.910 "optimal_io_boundary": 0 00:21:28.910 } 00:21:28.910 }, 00:21:28.910 { 00:21:28.910 "method": "bdev_wait_for_examine" 00:21:28.910 } 00:21:28.910 ] 00:21:28.910 }, 00:21:28.910 { 00:21:28.910 "subsystem": "nbd", 00:21:28.910 "config": [] 00:21:28.910 }, 00:21:28.910 { 00:21:28.910 "subsystem": "scheduler", 00:21:28.910 "config": [ 00:21:28.910 { 00:21:28.910 "method": "framework_set_scheduler", 00:21:28.910 "params": { 00:21:28.910 "name": "static" 00:21:28.910 } 00:21:28.910 } 00:21:28.910 ] 00:21:28.910 }, 00:21:28.910 { 00:21:28.910 "subsystem": "nvmf", 00:21:28.910 "config": [ 00:21:28.910 { 00:21:28.910 "method": "nvmf_set_config", 00:21:28.910 "params": { 00:21:28.910 "discovery_filter": "match_any", 00:21:28.910 "admin_cmd_passthru": { 00:21:28.910 "identify_ctrlr": false 00:21:28.910 } 00:21:28.910 } 00:21:28.910 }, 00:21:28.910 { 00:21:28.910 "method": "nvmf_set_max_subsystems", 00:21:28.910 "params": { 00:21:28.910 "max_subsystems": 1024 00:21:28.910 } 00:21:28.910 }, 00:21:28.910 { 00:21:28.910 "method": "nvmf_set_crdt", 00:21:28.910 "params": { 00:21:28.910 "crdt1": 0, 00:21:28.910 "crdt2": 0, 00:21:28.910 "crdt3": 0 00:21:28.910 } 00:21:28.910 }, 00:21:28.910 { 00:21:28.910 "method": "nvmf_create_transport", 00:21:28.910 "params": { 00:21:28.910 "trtype": "TCP", 00:21:28.910 "max_queue_depth": 128, 00:21:28.910 "max_io_qpairs_per_ctrlr": 127, 00:21:28.910 "in_capsule_data_size": 4096, 00:21:28.910 "max_io_size": 131072, 00:21:28.910 "io_unit_size": 131072, 00:21:28.910 "max_aq_depth": 128, 00:21:28.910 "num_shared_buffers": 511, 00:21:28.910 "buf_cache_size": 4294967295, 00:21:28.910 "dif_insert_or_strip": false, 00:21:28.910 "zcopy": false, 00:21:28.910 "c2h_success": false, 00:21:28.910 "sock_priority": 0, 00:21:28.910 "abort_timeout_sec": 1, 00:21:28.910 "ack_timeout": 0, 00:21:28.910 "data_wr_pool_size": 0 00:21:28.910 } 00:21:28.910 }, 00:21:28.910 { 00:21:28.910 "method": "nvmf_create_subsystem", 00:21:28.910 "params": { 00:21:28.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.910 "allow_any_host": false, 00:21:28.910 "serial_number": "00000000000000000000", 00:21:28.910 "model_number": "SPDK bdev Controller", 00:21:28.910 "max_namespaces": 32, 00:21:28.910 "min_cntlid": 1, 00:21:28.910 "max_cntlid": 65519, 00:21:28.910 "ana_reporting": false 00:21:28.910 } 00:21:28.910 }, 00:21:28.910 { 00:21:28.910 "method": "nvmf_subsystem_add_host", 00:21:28.910 "params": { 00:21:28.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.910 "host": "nqn.2016-06.io.spdk:host1", 00:21:28.910 "psk": "key0" 00:21:28.910 } 00:21:28.910 }, 00:21:28.910 { 00:21:28.910 "method": "nvmf_subsystem_add_ns", 00:21:28.910 "params": { 00:21:28.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.910 "namespace": { 00:21:28.910 "nsid": 1, 
00:21:28.910 "bdev_name": "malloc0", 00:21:28.910 "nguid": "8BE6809CC5424D9A831908350CFC7CD5", 00:21:28.910 "uuid": "8be6809c-c542-4d9a-8319-08350cfc7cd5", 00:21:28.910 "no_auto_visible": false 00:21:28.910 } 00:21:28.910 } 00:21:28.910 }, 00:21:28.910 { 00:21:28.910 "method": "nvmf_subsystem_add_listener", 00:21:28.910 "params": { 00:21:28.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.910 "listen_address": { 00:21:28.910 "trtype": "TCP", 00:21:28.910 "adrfam": "IPv4", 00:21:28.910 "traddr": "10.0.0.2", 00:21:28.910 "trsvcid": "4420" 00:21:28.910 }, 00:21:28.910 "secure_channel": true 00:21:28.910 } 00:21:28.910 } 00:21:28.910 ] 00:21:28.910 } 00:21:28.910 ] 00:21:28.910 }' 00:21:28.910 13:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:29.170 13:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:29.170 "subsystems": [ 00:21:29.170 { 00:21:29.170 "subsystem": "keyring", 00:21:29.170 "config": [ 00:21:29.170 { 00:21:29.170 "method": "keyring_file_add_key", 00:21:29.170 "params": { 00:21:29.170 "name": "key0", 00:21:29.170 "path": "/tmp/tmp.PAxKqfZ8FO" 00:21:29.170 } 00:21:29.170 } 00:21:29.170 ] 00:21:29.170 }, 00:21:29.170 { 00:21:29.170 "subsystem": "iobuf", 00:21:29.170 "config": [ 00:21:29.170 { 00:21:29.170 "method": "iobuf_set_options", 00:21:29.170 "params": { 00:21:29.170 "small_pool_count": 8192, 00:21:29.170 "large_pool_count": 1024, 00:21:29.170 "small_bufsize": 8192, 00:21:29.170 "large_bufsize": 135168 00:21:29.170 } 00:21:29.170 } 00:21:29.170 ] 00:21:29.170 }, 00:21:29.170 { 00:21:29.170 "subsystem": "sock", 00:21:29.170 "config": [ 00:21:29.170 { 00:21:29.170 "method": "sock_set_default_impl", 00:21:29.170 "params": { 00:21:29.170 "impl_name": "posix" 00:21:29.170 } 00:21:29.170 }, 00:21:29.170 { 00:21:29.170 "method": "sock_impl_set_options", 00:21:29.170 "params": { 00:21:29.170 "impl_name": "ssl", 00:21:29.170 "recv_buf_size": 4096, 00:21:29.170 "send_buf_size": 4096, 00:21:29.170 "enable_recv_pipe": true, 00:21:29.171 "enable_quickack": false, 00:21:29.171 "enable_placement_id": 0, 00:21:29.171 "enable_zerocopy_send_server": true, 00:21:29.171 "enable_zerocopy_send_client": false, 00:21:29.171 "zerocopy_threshold": 0, 00:21:29.171 "tls_version": 0, 00:21:29.171 "enable_ktls": false 00:21:29.171 } 00:21:29.171 }, 00:21:29.171 { 00:21:29.171 "method": "sock_impl_set_options", 00:21:29.171 "params": { 00:21:29.171 "impl_name": "posix", 00:21:29.171 "recv_buf_size": 2097152, 00:21:29.171 "send_buf_size": 2097152, 00:21:29.171 "enable_recv_pipe": true, 00:21:29.171 "enable_quickack": false, 00:21:29.171 "enable_placement_id": 0, 00:21:29.171 "enable_zerocopy_send_server": true, 00:21:29.171 "enable_zerocopy_send_client": false, 00:21:29.171 "zerocopy_threshold": 0, 00:21:29.171 "tls_version": 0, 00:21:29.171 "enable_ktls": false 00:21:29.171 } 00:21:29.171 } 00:21:29.171 ] 00:21:29.171 }, 00:21:29.171 { 00:21:29.171 "subsystem": "vmd", 00:21:29.171 "config": [] 00:21:29.171 }, 00:21:29.171 { 00:21:29.171 "subsystem": "accel", 00:21:29.171 "config": [ 00:21:29.171 { 00:21:29.171 "method": "accel_set_options", 00:21:29.171 "params": { 00:21:29.171 "small_cache_size": 128, 00:21:29.171 "large_cache_size": 16, 00:21:29.171 "task_count": 2048, 00:21:29.171 "sequence_count": 2048, 00:21:29.171 "buf_count": 2048 00:21:29.171 } 00:21:29.171 } 00:21:29.171 ] 00:21:29.171 }, 00:21:29.171 { 00:21:29.171 "subsystem": "bdev", 00:21:29.171 "config": [ 
00:21:29.171 { 00:21:29.171 "method": "bdev_set_options", 00:21:29.171 "params": { 00:21:29.171 "bdev_io_pool_size": 65535, 00:21:29.171 "bdev_io_cache_size": 256, 00:21:29.171 "bdev_auto_examine": true, 00:21:29.171 "iobuf_small_cache_size": 128, 00:21:29.171 "iobuf_large_cache_size": 16 00:21:29.171 } 00:21:29.171 }, 00:21:29.171 { 00:21:29.171 "method": "bdev_raid_set_options", 00:21:29.171 "params": { 00:21:29.171 "process_window_size_kb": 1024 00:21:29.171 } 00:21:29.171 }, 00:21:29.171 { 00:21:29.171 "method": "bdev_iscsi_set_options", 00:21:29.171 "params": { 00:21:29.171 "timeout_sec": 30 00:21:29.171 } 00:21:29.171 }, 00:21:29.171 { 00:21:29.171 "method": "bdev_nvme_set_options", 00:21:29.171 "params": { 00:21:29.171 "action_on_timeout": "none", 00:21:29.171 "timeout_us": 0, 00:21:29.171 "timeout_admin_us": 0, 00:21:29.171 "keep_alive_timeout_ms": 10000, 00:21:29.171 "arbitration_burst": 0, 00:21:29.171 "low_priority_weight": 0, 00:21:29.171 "medium_priority_weight": 0, 00:21:29.171 "high_priority_weight": 0, 00:21:29.171 "nvme_adminq_poll_period_us": 10000, 00:21:29.171 "nvme_ioq_poll_period_us": 0, 00:21:29.171 "io_queue_requests": 512, 00:21:29.171 "delay_cmd_submit": true, 00:21:29.171 "transport_retry_count": 4, 00:21:29.171 "bdev_retry_count": 3, 00:21:29.171 "transport_ack_timeout": 0, 00:21:29.171 "ctrlr_loss_timeout_sec": 0, 00:21:29.171 "reconnect_delay_sec": 0, 00:21:29.171 "fast_io_fail_timeout_sec": 0, 00:21:29.171 "disable_auto_failback": false, 00:21:29.171 "generate_uuids": false, 00:21:29.171 "transport_tos": 0, 00:21:29.171 "nvme_error_stat": false, 00:21:29.171 "rdma_srq_size": 0, 00:21:29.171 "io_path_stat": false, 00:21:29.171 "allow_accel_sequence": false, 00:21:29.171 "rdma_max_cq_size": 0, 00:21:29.171 "rdma_cm_event_timeout_ms": 0, 00:21:29.171 "dhchap_digests": [ 00:21:29.171 "sha256", 00:21:29.171 "sha384", 00:21:29.171 "sha512" 00:21:29.171 ], 00:21:29.171 "dhchap_dhgroups": [ 00:21:29.171 "null", 00:21:29.171 "ffdhe2048", 00:21:29.171 "ffdhe3072", 00:21:29.171 "ffdhe4096", 00:21:29.171 "ffdhe6144", 00:21:29.171 "ffdhe8192" 00:21:29.171 ] 00:21:29.171 } 00:21:29.171 }, 00:21:29.171 { 00:21:29.171 "method": "bdev_nvme_attach_controller", 00:21:29.171 "params": { 00:21:29.171 "name": "nvme0", 00:21:29.171 "trtype": "TCP", 00:21:29.171 "adrfam": "IPv4", 00:21:29.171 "traddr": "10.0.0.2", 00:21:29.171 "trsvcid": "4420", 00:21:29.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.171 "prchk_reftag": false, 00:21:29.171 "prchk_guard": false, 00:21:29.171 "ctrlr_loss_timeout_sec": 0, 00:21:29.171 "reconnect_delay_sec": 0, 00:21:29.171 "fast_io_fail_timeout_sec": 0, 00:21:29.171 "psk": "key0", 00:21:29.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.171 "hdgst": false, 00:21:29.171 "ddgst": false 00:21:29.171 } 00:21:29.171 }, 00:21:29.171 { 00:21:29.171 "method": "bdev_nvme_set_hotplug", 00:21:29.171 "params": { 00:21:29.171 "period_us": 100000, 00:21:29.171 "enable": false 00:21:29.171 } 00:21:29.171 }, 00:21:29.171 { 00:21:29.171 "method": "bdev_enable_histogram", 00:21:29.171 "params": { 00:21:29.171 "name": "nvme0n1", 00:21:29.171 "enable": true 00:21:29.171 } 00:21:29.171 }, 00:21:29.171 { 00:21:29.171 "method": "bdev_wait_for_examine" 00:21:29.171 } 00:21:29.171 ] 00:21:29.171 }, 00:21:29.171 { 00:21:29.171 "subsystem": "nbd", 00:21:29.171 "config": [] 00:21:29.171 } 00:21:29.171 ] 00:21:29.171 }' 00:21:29.171 13:06:50 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 731975 00:21:29.171 13:06:50 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 731975 ']' 00:21:29.171 13:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 731975 00:21:29.171 13:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:29.171 13:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:29.171 13:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 731975 00:21:29.171 13:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:29.171 13:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:29.171 13:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 731975' 00:21:29.171 killing process with pid 731975 00:21:29.171 13:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 731975 00:21:29.171 Received shutdown signal, test time was about 1.000000 seconds 00:21:29.171 00:21:29.171 Latency(us) 00:21:29.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.171 =================================================================================================================== 00:21:29.171 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.171 13:06:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 731975 00:21:29.431 13:06:51 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 731713 00:21:29.431 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 731713 ']' 00:21:29.431 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 731713 00:21:29.431 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:29.431 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:29.431 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 731713 00:21:29.431 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:29.431 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:29.431 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 731713' 00:21:29.431 killing process with pid 731713 00:21:29.431 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 731713 00:21:29.431 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 731713 00:21:29.691 13:06:51 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:29.691 13:06:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:29.691 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.691 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.691 13:06:51 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:29.691 "subsystems": [ 00:21:29.691 { 00:21:29.691 "subsystem": "keyring", 00:21:29.691 "config": [ 00:21:29.691 { 00:21:29.691 "method": "keyring_file_add_key", 00:21:29.691 "params": { 00:21:29.691 "name": "key0", 00:21:29.691 "path": "/tmp/tmp.PAxKqfZ8FO" 00:21:29.691 } 00:21:29.691 } 00:21:29.691 ] 00:21:29.691 }, 00:21:29.691 { 00:21:29.691 "subsystem": "iobuf", 00:21:29.691 "config": [ 00:21:29.691 { 00:21:29.691 "method": "iobuf_set_options", 00:21:29.691 "params": { 00:21:29.691 "small_pool_count": 8192, 00:21:29.691 "large_pool_count": 1024, 00:21:29.691 "small_bufsize": 8192, 00:21:29.691 "large_bufsize": 
135168 00:21:29.691 } 00:21:29.691 } 00:21:29.691 ] 00:21:29.691 }, 00:21:29.691 { 00:21:29.691 "subsystem": "sock", 00:21:29.691 "config": [ 00:21:29.691 { 00:21:29.691 "method": "sock_set_default_impl", 00:21:29.691 "params": { 00:21:29.691 "impl_name": "posix" 00:21:29.691 } 00:21:29.691 }, 00:21:29.691 { 00:21:29.691 "method": "sock_impl_set_options", 00:21:29.691 "params": { 00:21:29.691 "impl_name": "ssl", 00:21:29.691 "recv_buf_size": 4096, 00:21:29.691 "send_buf_size": 4096, 00:21:29.691 "enable_recv_pipe": true, 00:21:29.691 "enable_quickack": false, 00:21:29.691 "enable_placement_id": 0, 00:21:29.691 "enable_zerocopy_send_server": true, 00:21:29.691 "enable_zerocopy_send_client": false, 00:21:29.691 "zerocopy_threshold": 0, 00:21:29.691 "tls_version": 0, 00:21:29.691 "enable_ktls": false 00:21:29.691 } 00:21:29.691 }, 00:21:29.691 { 00:21:29.691 "method": "sock_impl_set_options", 00:21:29.691 "params": { 00:21:29.691 "impl_name": "posix", 00:21:29.691 "recv_buf_size": 2097152, 00:21:29.691 "send_buf_size": 2097152, 00:21:29.691 "enable_recv_pipe": true, 00:21:29.691 "enable_quickack": false, 00:21:29.691 "enable_placement_id": 0, 00:21:29.691 "enable_zerocopy_send_server": true, 00:21:29.691 "enable_zerocopy_send_client": false, 00:21:29.691 "zerocopy_threshold": 0, 00:21:29.691 "tls_version": 0, 00:21:29.691 "enable_ktls": false 00:21:29.691 } 00:21:29.691 } 00:21:29.691 ] 00:21:29.691 }, 00:21:29.691 { 00:21:29.691 "subsystem": "vmd", 00:21:29.691 "config": [] 00:21:29.691 }, 00:21:29.691 { 00:21:29.691 "subsystem": "accel", 00:21:29.691 "config": [ 00:21:29.691 { 00:21:29.691 "method": "accel_set_options", 00:21:29.691 "params": { 00:21:29.692 "small_cache_size": 128, 00:21:29.692 "large_cache_size": 16, 00:21:29.692 "task_count": 2048, 00:21:29.692 "sequence_count": 2048, 00:21:29.692 "buf_count": 2048 00:21:29.692 } 00:21:29.692 } 00:21:29.692 ] 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "subsystem": "bdev", 00:21:29.692 "config": [ 00:21:29.692 { 00:21:29.692 "method": "bdev_set_options", 00:21:29.692 "params": { 00:21:29.692 "bdev_io_pool_size": 65535, 00:21:29.692 "bdev_io_cache_size": 256, 00:21:29.692 "bdev_auto_examine": true, 00:21:29.692 "iobuf_small_cache_size": 128, 00:21:29.692 "iobuf_large_cache_size": 16 00:21:29.692 } 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "method": "bdev_raid_set_options", 00:21:29.692 "params": { 00:21:29.692 "process_window_size_kb": 1024 00:21:29.692 } 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "method": "bdev_iscsi_set_options", 00:21:29.692 "params": { 00:21:29.692 "timeout_sec": 30 00:21:29.692 } 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "method": "bdev_nvme_set_options", 00:21:29.692 "params": { 00:21:29.692 "action_on_timeout": "none", 00:21:29.692 "timeout_us": 0, 00:21:29.692 "timeout_admin_us": 0, 00:21:29.692 "keep_alive_timeout_ms": 10000, 00:21:29.692 "arbitration_burst": 0, 00:21:29.692 "low_priority_weight": 0, 00:21:29.692 "medium_priority_weight": 0, 00:21:29.692 "high_priority_weight": 0, 00:21:29.692 "nvme_adminq_poll_period_us": 10000, 00:21:29.692 "nvme_ioq_poll_period_us": 0, 00:21:29.692 "io_queue_requests": 0, 00:21:29.692 "delay_cmd_submit": true, 00:21:29.692 "transport_retry_count": 4, 00:21:29.692 "bdev_retry_count": 3, 00:21:29.692 "transport_ack_timeout": 0, 00:21:29.692 "ctrlr_loss_timeout_sec": 0, 00:21:29.692 "reconnect_delay_sec": 0, 00:21:29.692 "fast_io_fail_timeout_sec": 0, 00:21:29.692 "disable_auto_failback": false, 00:21:29.692 "generate_uuids": false, 00:21:29.692 "transport_tos": 0, 
00:21:29.692 "nvme_error_stat": false, 00:21:29.692 "rdma_srq_size": 0, 00:21:29.692 "io_path_stat": false, 00:21:29.692 "allow_accel_sequence": false, 00:21:29.692 "rdma_max_cq_size": 0, 00:21:29.692 "rdma_cm_event_timeout_ms": 0, 00:21:29.692 "dhchap_digests": [ 00:21:29.692 "sha256", 00:21:29.692 "sha384", 00:21:29.692 "sha512" 00:21:29.692 ], 00:21:29.692 "dhchap_dhgroups": [ 00:21:29.692 "null", 00:21:29.692 "ffdhe2048", 00:21:29.692 "ffdhe3072", 00:21:29.692 "ffdhe4096", 00:21:29.692 "ffdhe6144", 00:21:29.692 "ffdhe8192" 00:21:29.692 ] 00:21:29.692 } 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "method": "bdev_nvme_set_hotplug", 00:21:29.692 "params": { 00:21:29.692 "period_us": 100000, 00:21:29.692 "enable": false 00:21:29.692 } 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "method": "bdev_malloc_create", 00:21:29.692 "params": { 00:21:29.692 "name": "malloc0", 00:21:29.692 "num_blocks": 8192, 00:21:29.692 "block_size": 4096, 00:21:29.692 "physical_block_size": 4096, 00:21:29.692 "uuid": "8be6809c-c542-4d9a-8319-08350cfc7cd5", 00:21:29.692 "optimal_io_boundary": 0 00:21:29.692 } 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "method": "bdev_wait_for_examine" 00:21:29.692 } 00:21:29.692 ] 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "subsystem": "nbd", 00:21:29.692 "config": [] 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "subsystem": "scheduler", 00:21:29.692 "config": [ 00:21:29.692 { 00:21:29.692 "method": "framework_set_scheduler", 00:21:29.692 "params": { 00:21:29.692 "name": "static" 00:21:29.692 } 00:21:29.692 } 00:21:29.692 ] 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "subsystem": "nvmf", 00:21:29.692 "config": [ 00:21:29.692 { 00:21:29.692 "method": "nvmf_set_config", 00:21:29.692 "params": { 00:21:29.692 "discovery_filter": "match_any", 00:21:29.692 "admin_cmd_passthru": { 00:21:29.692 "identify_ctrlr": false 00:21:29.692 } 00:21:29.692 } 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "method": "nvmf_set_max_subsystems", 00:21:29.692 "params": { 00:21:29.692 "max_subsystems": 1024 00:21:29.692 } 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "method": "nvmf_set_crdt", 00:21:29.692 "params": { 00:21:29.692 "crdt1": 0, 00:21:29.692 "crdt2": 0, 00:21:29.692 "crdt3": 0 00:21:29.692 } 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "method": "nvmf_create_transport", 00:21:29.692 "params": { 00:21:29.692 "trtype": "TCP", 00:21:29.692 "max_queue_depth": 128, 00:21:29.692 "max_io_qpairs_per_ctrlr": 127, 00:21:29.692 "in_capsule_data_size": 4096, 00:21:29.692 "max_io_size": 131072, 00:21:29.692 "io_unit_size": 131072, 00:21:29.692 "max_aq_depth": 128, 00:21:29.692 "num_shared_buffers": 511, 00:21:29.692 "buf_cache_size": 4294967295, 00:21:29.692 "dif_insert_or_strip": false, 00:21:29.692 "zcopy": false, 00:21:29.692 "c2h_success": false, 00:21:29.692 "sock_priority": 0, 00:21:29.692 "abort_timeout_sec": 1, 00:21:29.692 "ack_timeout": 0, 00:21:29.692 "data_wr_pool_size": 0 00:21:29.692 } 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "method": "nvmf_create_subsystem", 00:21:29.692 "params": { 00:21:29.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.692 "allow_any_host": false, 00:21:29.692 "serial_number": "00000000000000000000", 00:21:29.692 "model_number": "SPDK bdev Controller", 00:21:29.692 "max_namespaces": 32, 00:21:29.692 "min_cntlid": 1, 00:21:29.692 "max_cntlid": 65519, 00:21:29.692 "ana_reporting": false 00:21:29.692 } 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "method": "nvmf_subsystem_add_host", 00:21:29.692 "params": { 00:21:29.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.692 
"host": "nqn.2016-06.io.spdk:host1", 00:21:29.692 "psk": "key0" 00:21:29.692 } 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "method": "nvmf_subsystem_add_ns", 00:21:29.692 "params": { 00:21:29.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.692 "namespace": { 00:21:29.692 "nsid": 1, 00:21:29.692 "bdev_name": "malloc0", 00:21:29.692 "nguid": "8BE6809CC5424D9A831908350CFC7CD5", 00:21:29.692 "uuid": "8be6809c-c542-4d9a-8319-08350cfc7cd5", 00:21:29.692 "no_auto_visible": false 00:21:29.692 } 00:21:29.692 } 00:21:29.692 }, 00:21:29.692 { 00:21:29.692 "method": "nvmf_subsystem_add_listener", 00:21:29.692 "params": { 00:21:29.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.692 "listen_address": { 00:21:29.692 "trtype": "TCP", 00:21:29.692 "adrfam": "IPv4", 00:21:29.692 "traddr": "10.0.0.2", 00:21:29.692 "trsvcid": "4420" 00:21:29.692 }, 00:21:29.692 "secure_channel": true 00:21:29.692 } 00:21:29.692 } 00:21:29.692 ] 00:21:29.692 } 00:21:29.692 ] 00:21:29.692 }' 00:21:29.692 13:06:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=732467 00:21:29.692 13:06:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 732467 00:21:29.692 13:06:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:29.692 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 732467 ']' 00:21:29.692 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.692 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.692 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.692 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.692 13:06:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.692 [2024-07-15 13:06:51.348187] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:21:29.692 [2024-07-15 13:06:51.348246] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.692 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.692 [2024-07-15 13:06:51.422524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.692 [2024-07-15 13:06:51.486422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.692 [2024-07-15 13:06:51.486461] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.692 [2024-07-15 13:06:51.486469] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.692 [2024-07-15 13:06:51.486475] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.692 [2024-07-15 13:06:51.486481] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:29.692 [2024-07-15 13:06:51.486537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.951 [2024-07-15 13:06:51.683702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.951 [2024-07-15 13:06:51.715713] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:29.951 [2024-07-15 13:06:51.725408] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=732778 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 732778 /var/tmp/bdevperf.sock 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 732778 ']' 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
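bdevperf is launched with -z, so it idles until something talks to it on /var/tmp/bdevperf.sock; its TLS initiator state (keyring key0 plus bdev_nvme_attach_controller against cnode1 with psk key0) arrives through -c /dev/fd/63 as the JSON echoed next. Once that attach succeeds, the test only needs two RPC-socket calls, both visible later in this trace. A condensed sketch, with paths shortened relative to the SPDK tree:

  # drive the already-configured bdevperf instance over its RPC socket
  RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # the controller created by the -c config should show up as nvme0
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'
  # kick off the workload defined on the bdevperf command line (-q 128 -o 4k -w verify -t 1)
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests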
00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.522 13:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:30.522 "subsystems": [ 00:21:30.522 { 00:21:30.522 "subsystem": "keyring", 00:21:30.522 "config": [ 00:21:30.522 { 00:21:30.522 "method": "keyring_file_add_key", 00:21:30.522 "params": { 00:21:30.522 "name": "key0", 00:21:30.522 "path": "/tmp/tmp.PAxKqfZ8FO" 00:21:30.522 } 00:21:30.522 } 00:21:30.522 ] 00:21:30.522 }, 00:21:30.523 { 00:21:30.523 "subsystem": "iobuf", 00:21:30.523 "config": [ 00:21:30.523 { 00:21:30.523 "method": "iobuf_set_options", 00:21:30.523 "params": { 00:21:30.523 "small_pool_count": 8192, 00:21:30.523 "large_pool_count": 1024, 00:21:30.523 "small_bufsize": 8192, 00:21:30.523 "large_bufsize": 135168 00:21:30.523 } 00:21:30.523 } 00:21:30.523 ] 00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "subsystem": "sock", 00:21:30.523 "config": [ 00:21:30.523 { 00:21:30.523 "method": "sock_set_default_impl", 00:21:30.523 "params": { 00:21:30.523 "impl_name": "posix" 00:21:30.523 } 00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "method": "sock_impl_set_options", 00:21:30.523 "params": { 00:21:30.523 "impl_name": "ssl", 00:21:30.523 "recv_buf_size": 4096, 00:21:30.523 "send_buf_size": 4096, 00:21:30.523 "enable_recv_pipe": true, 00:21:30.523 "enable_quickack": false, 00:21:30.523 "enable_placement_id": 0, 00:21:30.523 "enable_zerocopy_send_server": true, 00:21:30.523 "enable_zerocopy_send_client": false, 00:21:30.523 "zerocopy_threshold": 0, 00:21:30.523 "tls_version": 0, 00:21:30.523 "enable_ktls": false 00:21:30.523 } 00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "method": "sock_impl_set_options", 00:21:30.523 "params": { 00:21:30.523 "impl_name": "posix", 00:21:30.523 "recv_buf_size": 2097152, 00:21:30.523 "send_buf_size": 2097152, 00:21:30.523 "enable_recv_pipe": true, 00:21:30.523 "enable_quickack": false, 00:21:30.523 "enable_placement_id": 0, 00:21:30.523 "enable_zerocopy_send_server": true, 00:21:30.523 "enable_zerocopy_send_client": false, 00:21:30.523 "zerocopy_threshold": 0, 00:21:30.523 "tls_version": 0, 00:21:30.523 "enable_ktls": false 00:21:30.523 } 00:21:30.523 } 00:21:30.523 ] 00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "subsystem": "vmd", 00:21:30.523 "config": [] 00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "subsystem": "accel", 00:21:30.523 "config": [ 00:21:30.523 { 00:21:30.523 "method": "accel_set_options", 00:21:30.523 "params": { 00:21:30.523 "small_cache_size": 128, 00:21:30.523 "large_cache_size": 16, 00:21:30.523 "task_count": 2048, 00:21:30.523 "sequence_count": 2048, 00:21:30.523 "buf_count": 2048 00:21:30.523 } 00:21:30.523 } 00:21:30.523 ] 00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "subsystem": "bdev", 00:21:30.523 "config": [ 00:21:30.523 { 00:21:30.523 "method": "bdev_set_options", 00:21:30.523 "params": { 00:21:30.523 "bdev_io_pool_size": 65535, 00:21:30.523 "bdev_io_cache_size": 256, 00:21:30.523 "bdev_auto_examine": true, 00:21:30.523 "iobuf_small_cache_size": 128, 00:21:30.523 "iobuf_large_cache_size": 16 00:21:30.523 } 00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "method": "bdev_raid_set_options", 00:21:30.523 "params": { 00:21:30.523 "process_window_size_kb": 1024 00:21:30.523 } 
00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "method": "bdev_iscsi_set_options", 00:21:30.523 "params": { 00:21:30.523 "timeout_sec": 30 00:21:30.523 } 00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "method": "bdev_nvme_set_options", 00:21:30.523 "params": { 00:21:30.523 "action_on_timeout": "none", 00:21:30.523 "timeout_us": 0, 00:21:30.523 "timeout_admin_us": 0, 00:21:30.523 "keep_alive_timeout_ms": 10000, 00:21:30.523 "arbitration_burst": 0, 00:21:30.523 "low_priority_weight": 0, 00:21:30.523 "medium_priority_weight": 0, 00:21:30.523 "high_priority_weight": 0, 00:21:30.523 "nvme_adminq_poll_period_us": 10000, 00:21:30.523 "nvme_ioq_poll_period_us": 0, 00:21:30.523 "io_queue_requests": 512, 00:21:30.523 "delay_cmd_submit": true, 00:21:30.523 "transport_retry_count": 4, 00:21:30.523 "bdev_retry_count": 3, 00:21:30.523 "transport_ack_timeout": 0, 00:21:30.523 "ctrlr_loss_timeout_sec": 0, 00:21:30.523 "reconnect_delay_sec": 0, 00:21:30.523 "fast_io_fail_timeout_sec": 0, 00:21:30.523 "disable_auto_failback": false, 00:21:30.523 "generate_uuids": false, 00:21:30.523 "transport_tos": 0, 00:21:30.523 "nvme_error_stat": false, 00:21:30.523 "rdma_srq_size": 0, 00:21:30.523 "io_path_stat": false, 00:21:30.523 "allow_accel_sequence": false, 00:21:30.523 "rdma_max_cq_size": 0, 00:21:30.523 "rdma_cm_event_timeout_ms": 0, 00:21:30.523 "dhchap_digests": [ 00:21:30.523 "sha256", 00:21:30.523 "sha384", 00:21:30.523 "sha512" 00:21:30.523 ], 00:21:30.523 "dhchap_dhgroups": [ 00:21:30.523 "null", 00:21:30.523 "ffdhe2048", 00:21:30.523 "ffdhe3072", 00:21:30.523 "ffdhe4096", 00:21:30.523 "ffdhe6144", 00:21:30.523 "ffdhe8192" 00:21:30.523 ] 00:21:30.523 } 00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "method": "bdev_nvme_attach_controller", 00:21:30.523 "params": { 00:21:30.523 "name": "nvme0", 00:21:30.523 "trtype": "TCP", 00:21:30.523 "adrfam": "IPv4", 00:21:30.523 "traddr": "10.0.0.2", 00:21:30.523 "trsvcid": "4420", 00:21:30.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.523 "prchk_reftag": false, 00:21:30.523 "prchk_guard": false, 00:21:30.523 "ctrlr_loss_timeout_sec": 0, 00:21:30.523 "reconnect_delay_sec": 0, 00:21:30.523 "fast_io_fail_timeout_sec": 0, 00:21:30.523 "psk": "key0", 00:21:30.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:30.523 "hdgst": false, 00:21:30.523 "ddgst": false 00:21:30.523 } 00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "method": "bdev_nvme_set_hotplug", 00:21:30.523 "params": { 00:21:30.523 "period_us": 100000, 00:21:30.523 "enable": false 00:21:30.523 } 00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "method": "bdev_enable_histogram", 00:21:30.523 "params": { 00:21:30.523 "name": "nvme0n1", 00:21:30.523 "enable": true 00:21:30.523 } 00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "method": "bdev_wait_for_examine" 00:21:30.523 } 00:21:30.523 ] 00:21:30.523 }, 00:21:30.523 { 00:21:30.523 "subsystem": "nbd", 00:21:30.523 "config": [] 00:21:30.523 } 00:21:30.523 ] 00:21:30.523 }' 00:21:30.523 [2024-07-15 13:06:52.216297] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:21:30.523 [2024-07-15 13:06:52.216351] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid732778 ] 00:21:30.523 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.523 [2024-07-15 13:06:52.296297] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.785 [2024-07-15 13:06:52.349983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.785 [2024-07-15 13:06:52.484008] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.358 13:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.358 13:06:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:31.358 13:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:31.358 13:06:52 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:31.358 13:06:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.358 13:06:53 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:31.619 Running I/O for 1 seconds... 00:21:32.562 00:21:32.562 Latency(us) 00:21:32.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.562 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:32.562 Verification LBA range: start 0x0 length 0x2000 00:21:32.562 nvme0n1 : 1.06 3534.31 13.81 0.00 0.00 35343.97 4532.91 53084.16 00:21:32.562 =================================================================================================================== 00:21:32.562 Total : 3534.31 13.81 0.00 0.00 35343.97 4532.91 53084.16 00:21:32.562 0 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:32.562 nvmf_trace.0 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 732778 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 732778 ']' 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 732778 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.562 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 732778 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 732778' 00:21:32.824 killing process with pid 732778 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 732778 00:21:32.824 Received shutdown signal, test time was about 1.000000 seconds 00:21:32.824 00:21:32.824 Latency(us) 00:21:32.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.824 =================================================================================================================== 00:21:32.824 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 732778 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:32.824 rmmod nvme_tcp 00:21:32.824 rmmod nvme_fabrics 00:21:32.824 rmmod nvme_keyring 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 732467 ']' 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 732467 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 732467 ']' 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 732467 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 732467 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 732467' 00:21:32.824 killing process with pid 732467 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 732467 00:21:32.824 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 732467 00:21:33.085 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:33.085 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:33.085 13:06:54 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:33.085 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.085 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:33.085 13:06:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.085 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.085 13:06:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.636 13:06:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:35.636 13:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.muXTAcub1Y /tmp/tmp.auoQZugDqY /tmp/tmp.PAxKqfZ8FO 00:21:35.636 00:21:35.636 real 1m25.118s 00:21:35.636 user 2m9.304s 00:21:35.636 sys 0m27.910s 00:21:35.636 13:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:35.636 13:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.636 ************************************ 00:21:35.636 END TEST nvmf_tls 00:21:35.636 ************************************ 00:21:35.636 13:06:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:35.636 13:06:56 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:35.636 13:06:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:35.636 13:06:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.636 13:06:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:35.636 ************************************ 00:21:35.636 START TEST nvmf_fips 00:21:35.636 ************************************ 00:21:35.636 13:06:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:35.636 * Looking for test storage... 
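Before touching the network, fips.sh gates on the local OpenSSL install: version 3.0.0 or newer, a fips.so module under the directory reported by openssl info -modulesdir, OPENSSL_CONF pointed at a generated spdk_fips.conf, and both the base and FIPS providers listed by openssl list -providers. The deliberate openssl md5 failure further down ("Error setting digest") is the positive result here: with the FIPS provider active, a non-approved digest must be rejected. A compact sketch of that gate, built only from the openssl calls visible in the trace; the spdk_fips.conf contents are generated by the script and not reproduced here:

  # sanity-check a FIPS-capable OpenSSL 3.x before running the TLS tests (sketch)
  ver=$(openssl version | awk '{print $2}')            # must compare >= 3.0.0 (3.0.9 on this box)
  moddir=$(openssl info -modulesdir)
  [[ -f "$moddir/fips.so" ]] || { echo "no FIPS module under $moddir"; exit 1; }
  export OPENSSL_CONF=spdk_fips.conf                   # config emitted by fips.sh
  openssl list -providers | grep name                  # expect a base and a fips provider
  if openssl md5 /dev/null >/dev/null 2>&1; then       # MD5 succeeding means FIPS is NOT active
      echo "MD5 still permitted - FIPS mode not engaged"; exit 1
  fi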
00:21:35.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.636 13:06:57 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:35.636 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:35.637 Error setting digest 00:21:35.637 0082E132457F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:35.637 0082E132457F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:35.637 13:06:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:43.782 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.782 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:43.782 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:43.782 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:43.782 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:43.782 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:43.782 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:43.782 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:43.782 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:43.782 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:43.782 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:43.782 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:43.783 
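The gather_supported_nvmf_pci_devs block running here builds its NIC list from PCI device IDs (e810 0x1592/0x159b, x722 0x37d2, plus the Mellanox range) and then, since this is a TCP run, resolves each matching PCI function to a kernel interface by globbing its net/ directory in sysfs — which is how the "Found net devices under 0000:31:00.x" lines just below are produced. A standalone sketch of that resolution step; the two PCI addresses are the ones this job detected:

  # resolve E810 PCI functions to their net interface names, as nvmf/common.sh does
  for pci in 0000:31:00.0 0000:31:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e "$dev" ]] || continue                   # skip if the glob matched nothing
          echo "Found net devices under $pci: $(basename "$dev")"   # e.g. cvl_0_0 / cvl_0_1
      done
  done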
13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:43.783 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:43.783 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:43.783 Found net devices under 0000:31:00.0: cvl_0_0 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:43.783 Found net devices under 0000:31:00.1: cvl_0_1 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.783 13:07:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:43.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.798 ms 00:21:43.783 00:21:43.783 --- 10.0.0.2 ping statistics --- 00:21:43.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.783 rtt min/avg/max/mdev = 0.798/0.798/0.798/0.000 ms 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:43.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:21:43.783 00:21:43.783 --- 10.0.0.1 ping statistics --- 00:21:43.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.783 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=737858 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 737858 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 737858 ']' 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.783 13:07:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:43.783 [2024-07-15 13:07:05.359900] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:21:43.783 [2024-07-15 13:07:05.359950] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.783 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.783 [2024-07-15 13:07:05.450637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.783 [2024-07-15 13:07:05.513652] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.783 [2024-07-15 13:07:05.513691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:43.783 [2024-07-15 13:07:05.513699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.783 [2024-07-15 13:07:05.513705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.783 [2024-07-15 13:07:05.513711] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.783 [2024-07-15 13:07:05.513731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:44.355 13:07:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:44.616 [2024-07-15 13:07:06.296506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.616 [2024-07-15 13:07:06.312495] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:44.616 [2024-07-15 13:07:06.312774] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.616 [2024-07-15 13:07:06.342644] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:44.616 malloc0 00:21:44.616 13:07:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.616 13:07:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=738187 00:21:44.616 13:07:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 738187 /var/tmp/bdevperf.sock 00:21:44.616 13:07:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:44.616 13:07:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 738187 ']' 00:21:44.616 13:07:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.616 13:07:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:21:44.616 13:07:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.616 13:07:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.616 13:07:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:44.876 [2024-07-15 13:07:06.444707] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:21:44.876 [2024-07-15 13:07:06.444780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738187 ] 00:21:44.876 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.876 [2024-07-15 13:07:06.508819] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.876 [2024-07-15 13:07:06.571721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.445 13:07:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.445 13:07:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:45.445 13:07:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:45.706 [2024-07-15 13:07:07.339365] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.706 [2024-07-15 13:07:07.339435] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:45.706 TLSTESTn1 00:21:45.706 13:07:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:45.706 Running I/O for 10 seconds... 
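To recap the setup just traced: fips.sh writes the interchange-format PSK (NVMeTLSkey-1:01:...) to test/nvmf/fips/key.txt, chmods it 0600, and hands the same file to the target through setup_nvmf_tgt_conf and to the initiator through bdev_nvme_attach_controller --psk — which is what triggers the two "deprecated feature" warnings about PSK-by-path above. Condensed below, with paths shortened relative to the SPDK tree and every flag taken from the fips.sh@150 line in this trace:

  # initiator side: attach a TLS-secured controller by PSK file, then run the timed workload
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/fips/key.txt
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests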
00:21:57.945 00:21:57.945 Latency(us) 00:21:57.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.945 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:57.945 Verification LBA range: start 0x0 length 0x2000 00:21:57.945 TLSTESTn1 : 10.08 3711.99 14.50 0.00 0.00 34350.55 6034.77 86070.61 00:21:57.945 =================================================================================================================== 00:21:57.945 Total : 3711.99 14.50 0.00 0.00 34350.55 6034.77 86070.61 00:21:57.945 0 00:21:57.945 13:07:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:57.945 13:07:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:57.945 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:21:57.945 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:21:57.945 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:57.945 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:57.945 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:57.945 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:57.945 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:57.946 nvmf_trace.0 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 738187 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 738187 ']' 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 738187 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 738187 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 738187' 00:21:57.946 killing process with pid 738187 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 738187 00:21:57.946 Received shutdown signal, test time was about 10.000000 seconds 00:21:57.946 00:21:57.946 Latency(us) 00:21:57.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.946 =================================================================================================================== 00:21:57.946 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.946 [2024-07-15 13:07:17.782826] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 738187 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:57.946 rmmod nvme_tcp 00:21:57.946 rmmod nvme_fabrics 00:21:57.946 rmmod nvme_keyring 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 737858 ']' 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 737858 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 737858 ']' 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 737858 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:57.946 13:07:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 737858 00:21:57.946 13:07:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:57.946 13:07:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:57.946 13:07:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 737858' 00:21:57.946 killing process with pid 737858 00:21:57.946 13:07:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 737858 00:21:57.946 [2024-07-15 13:07:18.015506] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:57.946 13:07:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 737858 00:21:57.946 13:07:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:57.946 13:07:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:57.946 13:07:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:57.946 13:07:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:57.946 13:07:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:57.946 13:07:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.946 13:07:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.946 13:07:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.519 13:07:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:58.519 13:07:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:58.519 00:21:58.519 real 0m23.282s 00:21:58.519 user 0m23.876s 00:21:58.519 sys 0m10.119s 00:21:58.519 13:07:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:58.519 13:07:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:58.519 ************************************ 00:21:58.519 END TEST nvmf_fips 00:21:58.519 
************************************ 00:21:58.519 13:07:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:58.519 13:07:20 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:58.519 13:07:20 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:58.519 13:07:20 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:58.519 13:07:20 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:58.519 13:07:20 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:58.519 13:07:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:06.666 13:07:28 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:06.667 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:06.667 13:07:28 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:06.667 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:06.667 Found net devices under 0000:31:00.0: cvl_0_0 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:06.667 Found net devices under 0000:31:00.1: cvl_0_1 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:06.667 13:07:28 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:06.667 13:07:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:06.667 13:07:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:22:06.667 13:07:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:06.667 ************************************ 00:22:06.667 START TEST nvmf_perf_adq 00:22:06.667 ************************************ 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:06.667 * Looking for test storage... 00:22:06.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:06.667 13:07:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:14.824 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:14.824 Found 0000:31:00.1 (0x8086 - 0x159b) 
00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:14.824 Found net devices under 0000:31:00.0: cvl_0_0 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:14.824 Found net devices under 0000:31:00.1: cvl_0_1 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:14.824 13:07:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:16.213 13:07:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:18.126 13:07:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:23.513 13:07:44 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:23.513 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:23.513 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:23.513 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:23.514 Found net devices under 0000:31:00.0: cvl_0_0 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:23.514 Found net devices under 0000:31:00.1: cvl_0_1 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.514 13:07:44 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:23.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:22:23.514 00:22:23.514 --- 10.0.0.2 ping statistics --- 00:22:23.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.514 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:23.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.378 ms 00:22:23.514 00:22:23.514 --- 10.0.0.1 ping statistics --- 00:22:23.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.514 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=751116 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 751116 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 751116 ']' 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.514 13:07:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.514 [2024-07-15 13:07:44.985345] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:22:23.514 [2024-07-15 13:07:44.985419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.514 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.514 [2024-07-15 13:07:45.061787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.514 [2024-07-15 13:07:45.128562] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.514 [2024-07-15 13:07:45.128603] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.514 [2024-07-15 13:07:45.128612] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.514 [2024-07-15 13:07:45.128618] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.514 [2024-07-15 13:07:45.128624] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.514 [2024-07-15 13:07:45.128763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.514 [2024-07-15 13:07:45.128880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.514 [2024-07-15 13:07:45.129037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.514 [2024-07-15 13:07:45.129037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.514 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.776 [2024-07-15 13:07:45.340287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.776 Malloc1 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.776 [2024-07-15 13:07:45.399659] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=751142 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:23.776 13:07:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:23.776 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.693 13:07:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:25.693 13:07:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.693 13:07:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.693 13:07:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.693 13:07:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:25.693 
"tick_rate": 2400000000, 00:22:25.693 "poll_groups": [ 00:22:25.693 { 00:22:25.693 "name": "nvmf_tgt_poll_group_000", 00:22:25.693 "admin_qpairs": 1, 00:22:25.693 "io_qpairs": 1, 00:22:25.693 "current_admin_qpairs": 1, 00:22:25.693 "current_io_qpairs": 1, 00:22:25.693 "pending_bdev_io": 0, 00:22:25.693 "completed_nvme_io": 20215, 00:22:25.693 "transports": [ 00:22:25.693 { 00:22:25.693 "trtype": "TCP" 00:22:25.693 } 00:22:25.693 ] 00:22:25.693 }, 00:22:25.693 { 00:22:25.693 "name": "nvmf_tgt_poll_group_001", 00:22:25.693 "admin_qpairs": 0, 00:22:25.693 "io_qpairs": 1, 00:22:25.693 "current_admin_qpairs": 0, 00:22:25.693 "current_io_qpairs": 1, 00:22:25.693 "pending_bdev_io": 0, 00:22:25.693 "completed_nvme_io": 30456, 00:22:25.693 "transports": [ 00:22:25.693 { 00:22:25.693 "trtype": "TCP" 00:22:25.693 } 00:22:25.693 ] 00:22:25.693 }, 00:22:25.693 { 00:22:25.693 "name": "nvmf_tgt_poll_group_002", 00:22:25.693 "admin_qpairs": 0, 00:22:25.693 "io_qpairs": 1, 00:22:25.693 "current_admin_qpairs": 0, 00:22:25.693 "current_io_qpairs": 1, 00:22:25.693 "pending_bdev_io": 0, 00:22:25.693 "completed_nvme_io": 20472, 00:22:25.693 "transports": [ 00:22:25.693 { 00:22:25.693 "trtype": "TCP" 00:22:25.693 } 00:22:25.693 ] 00:22:25.693 }, 00:22:25.693 { 00:22:25.693 "name": "nvmf_tgt_poll_group_003", 00:22:25.693 "admin_qpairs": 0, 00:22:25.693 "io_qpairs": 1, 00:22:25.693 "current_admin_qpairs": 0, 00:22:25.693 "current_io_qpairs": 1, 00:22:25.693 "pending_bdev_io": 0, 00:22:25.693 "completed_nvme_io": 20623, 00:22:25.693 "transports": [ 00:22:25.693 { 00:22:25.693 "trtype": "TCP" 00:22:25.693 } 00:22:25.693 ] 00:22:25.693 } 00:22:25.693 ] 00:22:25.693 }' 00:22:25.693 13:07:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:25.693 13:07:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:25.693 13:07:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:25.693 13:07:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:25.693 13:07:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 751142 00:22:33.833 Initializing NVMe Controllers 00:22:33.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:33.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:33.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:33.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:33.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:33.833 Initialization complete. Launching workers. 
00:22:33.833 ======================================================== 00:22:33.833 Latency(us) 00:22:33.833 Device Information : IOPS MiB/s Average min max 00:22:33.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13009.10 50.82 4919.95 1275.85 9706.50 00:22:33.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15689.40 61.29 4078.52 1314.17 7937.94 00:22:33.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13270.80 51.84 4822.46 1296.55 10688.85 00:22:33.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11251.70 43.95 5706.22 1230.67 45974.27 00:22:33.833 ======================================================== 00:22:33.833 Total : 53220.99 207.89 4813.82 1230.67 45974.27 00:22:33.833 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.833 rmmod nvme_tcp 00:22:33.833 rmmod nvme_fabrics 00:22:33.833 rmmod nvme_keyring 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 751116 ']' 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 751116 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 751116 ']' 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 751116 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.833 13:07:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 751116 00:22:34.095 13:07:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:34.095 13:07:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:34.095 13:07:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 751116' 00:22:34.095 killing process with pid 751116 00:22:34.095 13:07:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 751116 00:22:34.095 13:07:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 751116 00:22:34.095 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:34.095 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:34.095 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:34.095 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.095 13:07:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:34.095 13:07:55 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.095 13:07:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.095 13:07:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.644 13:07:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:36.644 13:07:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:36.644 13:07:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:37.584 13:07:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:39.498 13:08:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.786 13:08:06 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:44.786 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:44.786 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:44.786 Found net devices under 0000:31:00.0: cvl_0_0 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:44.786 Found net devices under 0000:31:00.1: cvl_0_1 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:44.786 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.787 
13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:44.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:22:44.787 00:22:44.787 --- 10.0.0.2 ping statistics --- 00:22:44.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.787 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:22:44.787 00:22:44.787 --- 10.0.0.1 ping statistics --- 00:22:44.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.787 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:44.787 net.core.busy_poll = 1 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:44.787 net.core.busy_read = 1 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:44.787 13:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=755722 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 755722 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 755722 ']' 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.048 13:08:06 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.048 [2024-07-15 13:08:06.873053] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:22:45.048 [2024-07-15 13:08:06.873120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.308 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.308 [2024-07-15 13:08:06.952970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.308 [2024-07-15 13:08:07.027440] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.308 [2024-07-15 13:08:07.027480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.308 [2024-07-15 13:08:07.027488] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.308 [2024-07-15 13:08:07.027495] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.308 [2024-07-15 13:08:07.027500] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
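adq_configure_driver above is the heart of this test: inside the cvl_0_0_ns_spdk namespace it enables hardware TC offload on the E810 port, turns on socket busy polling, splits the queues into two traffic classes with mqprio, and installs a hardware-offloaded flower filter that steers NVMe/TCP traffic (TCP dst port 4420 toward 10.0.0.2) into the ADQ traffic class. Stripped of the ip netns exec wrapper and with the interface and port parameterized, the same sequence is (a sketch of the commands already shown, not the script verbatim):

    IFACE=cvl_0_0   # target-side E810 port, as moved into the netns above
    PORT=4420       # NVMe/TCP listener port
    ethtool --offload "$IFACE" hw-tc-offload on
    ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1
    # Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (ADQ),
    # offloaded to the NIC in channel mode.
    tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$IFACE" ingress
    # Hardware-only (skip_sw) flower rule: NVMe/TCP flows land in hw_tc 1.
    tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port "$PORT" skip_sw hw_tc 1
    # scripts/perf/nvmf/set_xps_rxqs (run next in the log) then aligns XPS
    # with the receive queues so TX and RX for a flow stay on the same queue pair.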
00:22:45.308 [2024-07-15 13:08:07.027638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.308 [2024-07-15 13:08:07.027757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.308 [2024-07-15 13:08:07.027914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.308 [2024-07-15 13:08:07.027915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.880 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.880 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:45.880 13:08:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.880 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:45.880 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.880 13:08:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.881 13:08:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:45.881 13:08:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:45.881 13:08:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:45.881 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.881 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.881 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.168 [2024-07-15 13:08:07.826570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.168 Malloc1 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.168 13:08:07 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.168 [2024-07-15 13:08:07.885954] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=755965 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:46.168 13:08:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:46.168 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.085 13:08:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:48.086 13:08:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.086 13:08:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.347 13:08:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.347 13:08:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:48.347 "tick_rate": 2400000000, 00:22:48.347 "poll_groups": [ 00:22:48.347 { 00:22:48.347 "name": "nvmf_tgt_poll_group_000", 00:22:48.347 "admin_qpairs": 1, 00:22:48.347 "io_qpairs": 0, 00:22:48.347 "current_admin_qpairs": 1, 00:22:48.347 "current_io_qpairs": 0, 00:22:48.347 "pending_bdev_io": 0, 00:22:48.347 "completed_nvme_io": 0, 00:22:48.347 "transports": [ 00:22:48.347 { 00:22:48.347 "trtype": "TCP" 00:22:48.347 } 00:22:48.347 ] 00:22:48.347 }, 00:22:48.347 { 00:22:48.347 "name": "nvmf_tgt_poll_group_001", 00:22:48.347 "admin_qpairs": 0, 00:22:48.347 "io_qpairs": 4, 00:22:48.347 "current_admin_qpairs": 0, 00:22:48.347 "current_io_qpairs": 4, 00:22:48.347 "pending_bdev_io": 0, 00:22:48.347 "completed_nvme_io": 51771, 00:22:48.347 "transports": [ 00:22:48.347 { 00:22:48.347 "trtype": "TCP" 00:22:48.347 } 00:22:48.347 ] 00:22:48.347 }, 00:22:48.347 { 00:22:48.347 "name": "nvmf_tgt_poll_group_002", 00:22:48.347 "admin_qpairs": 0, 00:22:48.347 "io_qpairs": 0, 00:22:48.347 "current_admin_qpairs": 0, 00:22:48.347 "current_io_qpairs": 0, 00:22:48.347 "pending_bdev_io": 0, 00:22:48.347 "completed_nvme_io": 0, 00:22:48.347 
"transports": [ 00:22:48.347 { 00:22:48.347 "trtype": "TCP" 00:22:48.347 } 00:22:48.347 ] 00:22:48.347 }, 00:22:48.347 { 00:22:48.347 "name": "nvmf_tgt_poll_group_003", 00:22:48.347 "admin_qpairs": 0, 00:22:48.347 "io_qpairs": 0, 00:22:48.347 "current_admin_qpairs": 0, 00:22:48.347 "current_io_qpairs": 0, 00:22:48.347 "pending_bdev_io": 0, 00:22:48.347 "completed_nvme_io": 0, 00:22:48.347 "transports": [ 00:22:48.347 { 00:22:48.347 "trtype": "TCP" 00:22:48.347 } 00:22:48.347 ] 00:22:48.347 } 00:22:48.347 ] 00:22:48.347 }' 00:22:48.347 13:08:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:48.347 13:08:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:48.347 13:08:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:22:48.347 13:08:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:22:48.347 13:08:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 755965 00:22:56.483 Initializing NVMe Controllers 00:22:56.483 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:56.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:56.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:56.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:56.483 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:56.483 Initialization complete. Launching workers. 00:22:56.483 ======================================================== 00:22:56.483 Latency(us) 00:22:56.483 Device Information : IOPS MiB/s Average min max 00:22:56.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5934.30 23.18 10786.30 1219.38 55030.95 00:22:56.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5878.70 22.96 10889.11 1149.70 54839.63 00:22:56.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8565.20 33.46 7487.40 1268.35 54512.94 00:22:56.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7032.30 27.47 9119.25 1360.81 54436.89 00:22:56.483 ======================================================== 00:22:56.483 Total : 27410.50 107.07 9349.82 1149.70 55030.95 00:22:56.483 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.483 rmmod nvme_tcp 00:22:56.483 rmmod nvme_fabrics 00:22:56.483 rmmod nvme_keyring 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 755722 ']' 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 755722 
00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 755722 ']' 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 755722 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 755722 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 755722' 00:22:56.483 killing process with pid 755722 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 755722 00:22:56.483 13:08:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 755722 00:22:56.745 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:56.745 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:56.745 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:56.745 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.745 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:56.745 13:08:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.745 13:08:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.745 13:08:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.048 13:08:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:00.048 13:08:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:00.048 00:23:00.048 real 0m53.319s 00:23:00.048 user 2m47.542s 00:23:00.048 sys 0m11.337s 00:23:00.048 13:08:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:00.048 13:08:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.048 ************************************ 00:23:00.048 END TEST nvmf_perf_adq 00:23:00.048 ************************************ 00:23:00.048 13:08:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:00.048 13:08:21 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:00.048 13:08:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:00.048 13:08:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:00.048 13:08:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:00.048 ************************************ 00:23:00.048 START TEST nvmf_shutdown 00:23:00.048 ************************************ 00:23:00.048 13:08:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:00.048 * Looking for test storage... 
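run_test is the harness glue visible throughout this log: it prints the START TEST banner, times the wrapped script or function, emits the real/user/sys summary and the END TEST banner, and propagates the exit status so autotest_common.sh can fail the build. Illustratively (this reproduces only the observable pattern, not SPDK's actual run_test implementation):

    run_test_sketch() {
        # Wrap a sub-test: banner, run it under `time`, banner again,
        # and return the sub-test's own exit status.
        local name=$1; shift
        echo "************ START TEST $name ************"
        local rc=0
        time "$@" || rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }

    # e.g. run_test_sketch nvmf_shutdown ./test/nvmf/target/shutdown.sh --transport=tcp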
00:23:00.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:00.048 13:08:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.048 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:00.048 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:00.048 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.048 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.048 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.048 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.048 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.048 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.048 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.048 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.048 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:00.049 ************************************ 00:23:00.049 START TEST nvmf_shutdown_tc1 00:23:00.049 ************************************ 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:23:00.049 13:08:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:00.049 13:08:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:08.194 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.194 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:08.194 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:08.194 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:08.194 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:08.194 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:08.194 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:08.194 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:08.194 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:08.194 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:08.194 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:08.195 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:08.195 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.195 13:08:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:08.195 Found net devices under 0000:31:00.0: cvl_0_0 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:08.195 Found net devices under 0000:31:00.1: cvl_0_1 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:08.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:08.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:23:08.195 00:23:08.195 --- 10.0.0.2 ping statistics --- 00:23:08.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.195 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:08.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:23:08.195 00:23:08.195 --- 10.0.0.1 ping statistics --- 00:23:08.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.195 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=762840 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 762840 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 762840 ']' 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:08.195 13:08:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:08.195 [2024-07-15 13:08:29.810559] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
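For nvmf_shutdown_tc1 the same nvmftestinit network bring-up is repeated and a fresh nvmf_tgt (nvmfpid=762840 above) is launched inside the namespace, this time on core mask 0x1E, after which the script blocks in waitforlisten until the target's RPC socket answers. One simplified way to express that wait (an illustration of the role waitforlisten plays, not SPDK's helper itself; it assumes the repo's scripts/rpc.py is available):

    # Poll until an RPC round-trip against the target's unix socket succeeds.
    sock=/var/tmp/spdk.sock
    up=0
    for _ in {1..100}; do
        if ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            up=1; break
        fi
        sleep 0.2
    done
    (( up )) || { echo "nvmf_tgt did not come up on $sock" >&2; exit 1; }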
00:23:08.195 [2024-07-15 13:08:29.810623] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.195 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.195 [2024-07-15 13:08:29.906560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.195 [2024-07-15 13:08:30.000144] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.195 [2024-07-15 13:08:30.000195] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.195 [2024-07-15 13:08:30.000203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.195 [2024-07-15 13:08:30.000211] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.195 [2024-07-15 13:08:30.000217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.195 [2024-07-15 13:08:30.000286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.195 [2024-07-15 13:08:30.000463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.195 [2024-07-15 13:08:30.000833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:08.195 [2024-07-15 13:08:30.002590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.766 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.766 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:08.766 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.766 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.766 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.027 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.027 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.027 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.027 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.027 [2024-07-15 13:08:30.622611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.027 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.027 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:09.027 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:09.027 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:09.027 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.027 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:09.027 13:08:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.027 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.027 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.028 13:08:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.028 Malloc1 00:23:09.028 [2024-07-15 13:08:30.725999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.028 Malloc2 00:23:09.028 Malloc3 00:23:09.028 Malloc4 00:23:09.288 Malloc5 00:23:09.289 Malloc6 00:23:09.289 Malloc7 00:23:09.289 Malloc8 00:23:09.289 Malloc9 00:23:09.289 Malloc10 00:23:09.289 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.289 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:09.289 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:09.289 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=763161 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 763161 
/var/tmp/bdevperf.sock 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 763161 ']' 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.550 { 00:23:09.550 "params": { 00:23:09.550 "name": "Nvme$subsystem", 00:23:09.550 "trtype": "$TEST_TRANSPORT", 00:23:09.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.550 "adrfam": "ipv4", 00:23:09.550 "trsvcid": "$NVMF_PORT", 00:23:09.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.550 "hdgst": ${hdgst:-false}, 00:23:09.550 "ddgst": ${ddgst:-false} 00:23:09.550 }, 00:23:09.550 "method": "bdev_nvme_attach_controller" 00:23:09.550 } 00:23:09.550 EOF 00:23:09.550 )") 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.550 { 00:23:09.550 "params": { 00:23:09.550 "name": "Nvme$subsystem", 00:23:09.550 "trtype": "$TEST_TRANSPORT", 00:23:09.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.550 "adrfam": "ipv4", 00:23:09.550 "trsvcid": "$NVMF_PORT", 00:23:09.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.550 "hdgst": ${hdgst:-false}, 00:23:09.550 "ddgst": ${ddgst:-false} 00:23:09.550 }, 00:23:09.550 "method": "bdev_nvme_attach_controller" 00:23:09.550 } 00:23:09.550 EOF 00:23:09.550 )") 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.550 { 00:23:09.550 "params": { 00:23:09.550 
"name": "Nvme$subsystem", 00:23:09.550 "trtype": "$TEST_TRANSPORT", 00:23:09.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.550 "adrfam": "ipv4", 00:23:09.550 "trsvcid": "$NVMF_PORT", 00:23:09.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.550 "hdgst": ${hdgst:-false}, 00:23:09.550 "ddgst": ${ddgst:-false} 00:23:09.550 }, 00:23:09.550 "method": "bdev_nvme_attach_controller" 00:23:09.550 } 00:23:09.550 EOF 00:23:09.550 )") 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.550 { 00:23:09.550 "params": { 00:23:09.550 "name": "Nvme$subsystem", 00:23:09.550 "trtype": "$TEST_TRANSPORT", 00:23:09.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.550 "adrfam": "ipv4", 00:23:09.550 "trsvcid": "$NVMF_PORT", 00:23:09.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.550 "hdgst": ${hdgst:-false}, 00:23:09.550 "ddgst": ${ddgst:-false} 00:23:09.550 }, 00:23:09.550 "method": "bdev_nvme_attach_controller" 00:23:09.550 } 00:23:09.550 EOF 00:23:09.550 )") 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.550 { 00:23:09.550 "params": { 00:23:09.550 "name": "Nvme$subsystem", 00:23:09.550 "trtype": "$TEST_TRANSPORT", 00:23:09.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.550 "adrfam": "ipv4", 00:23:09.550 "trsvcid": "$NVMF_PORT", 00:23:09.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.550 "hdgst": ${hdgst:-false}, 00:23:09.550 "ddgst": ${ddgst:-false} 00:23:09.550 }, 00:23:09.550 "method": "bdev_nvme_attach_controller" 00:23:09.550 } 00:23:09.550 EOF 00:23:09.550 )") 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.550 { 00:23:09.550 "params": { 00:23:09.550 "name": "Nvme$subsystem", 00:23:09.550 "trtype": "$TEST_TRANSPORT", 00:23:09.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.550 "adrfam": "ipv4", 00:23:09.550 "trsvcid": "$NVMF_PORT", 00:23:09.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.550 "hdgst": ${hdgst:-false}, 00:23:09.550 "ddgst": ${ddgst:-false} 00:23:09.550 }, 00:23:09.550 "method": "bdev_nvme_attach_controller" 00:23:09.550 } 00:23:09.550 EOF 00:23:09.550 )") 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.550 { 00:23:09.550 "params": { 00:23:09.550 "name": "Nvme$subsystem", 
00:23:09.550 "trtype": "$TEST_TRANSPORT", 00:23:09.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.550 "adrfam": "ipv4", 00:23:09.550 "trsvcid": "$NVMF_PORT", 00:23:09.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.550 "hdgst": ${hdgst:-false}, 00:23:09.550 "ddgst": ${ddgst:-false} 00:23:09.550 }, 00:23:09.550 "method": "bdev_nvme_attach_controller" 00:23:09.550 } 00:23:09.550 EOF 00:23:09.550 )") 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.550 { 00:23:09.550 "params": { 00:23:09.550 "name": "Nvme$subsystem", 00:23:09.550 "trtype": "$TEST_TRANSPORT", 00:23:09.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.550 "adrfam": "ipv4", 00:23:09.550 "trsvcid": "$NVMF_PORT", 00:23:09.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.550 "hdgst": ${hdgst:-false}, 00:23:09.550 "ddgst": ${ddgst:-false} 00:23:09.550 }, 00:23:09.550 "method": "bdev_nvme_attach_controller" 00:23:09.550 } 00:23:09.550 EOF 00:23:09.550 )") 00:23:09.550 [2024-07-15 13:08:31.183873] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:23:09.550 [2024-07-15 13:08:31.183944] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.550 { 00:23:09.550 "params": { 00:23:09.550 "name": "Nvme$subsystem", 00:23:09.550 "trtype": "$TEST_TRANSPORT", 00:23:09.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.550 "adrfam": "ipv4", 00:23:09.550 "trsvcid": "$NVMF_PORT", 00:23:09.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.550 "hdgst": ${hdgst:-false}, 00:23:09.550 "ddgst": ${ddgst:-false} 00:23:09.550 }, 00:23:09.550 "method": "bdev_nvme_attach_controller" 00:23:09.550 } 00:23:09.550 EOF 00:23:09.550 )") 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:09.550 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:09.550 { 00:23:09.550 "params": { 00:23:09.550 "name": "Nvme$subsystem", 00:23:09.550 "trtype": "$TEST_TRANSPORT", 00:23:09.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:09.550 "adrfam": "ipv4", 00:23:09.550 "trsvcid": "$NVMF_PORT", 00:23:09.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:09.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:09.550 "hdgst": ${hdgst:-false}, 00:23:09.550 "ddgst": ${ddgst:-false} 00:23:09.550 }, 00:23:09.550 "method": "bdev_nvme_attach_controller" 00:23:09.550 } 00:23:09.551 EOF 00:23:09.551 )") 00:23:09.551 13:08:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:09.551 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:09.551 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:09.551 13:08:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:09.551 "params": { 00:23:09.551 "name": "Nvme1", 00:23:09.551 "trtype": "tcp", 00:23:09.551 "traddr": "10.0.0.2", 00:23:09.551 "adrfam": "ipv4", 00:23:09.551 "trsvcid": "4420", 00:23:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.551 "hdgst": false, 00:23:09.551 "ddgst": false 00:23:09.551 }, 00:23:09.551 "method": "bdev_nvme_attach_controller" 00:23:09.551 },{ 00:23:09.551 "params": { 00:23:09.551 "name": "Nvme2", 00:23:09.551 "trtype": "tcp", 00:23:09.551 "traddr": "10.0.0.2", 00:23:09.551 "adrfam": "ipv4", 00:23:09.551 "trsvcid": "4420", 00:23:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:09.551 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:09.551 "hdgst": false, 00:23:09.551 "ddgst": false 00:23:09.551 }, 00:23:09.551 "method": "bdev_nvme_attach_controller" 00:23:09.551 },{ 00:23:09.551 "params": { 00:23:09.551 "name": "Nvme3", 00:23:09.551 "trtype": "tcp", 00:23:09.551 "traddr": "10.0.0.2", 00:23:09.551 "adrfam": "ipv4", 00:23:09.551 "trsvcid": "4420", 00:23:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:09.551 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:09.551 "hdgst": false, 00:23:09.551 "ddgst": false 00:23:09.551 }, 00:23:09.551 "method": "bdev_nvme_attach_controller" 00:23:09.551 },{ 00:23:09.551 "params": { 00:23:09.551 "name": "Nvme4", 00:23:09.551 "trtype": "tcp", 00:23:09.551 "traddr": "10.0.0.2", 00:23:09.551 "adrfam": "ipv4", 00:23:09.551 "trsvcid": "4420", 00:23:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:09.551 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:09.551 "hdgst": false, 00:23:09.551 "ddgst": false 00:23:09.551 }, 00:23:09.551 "method": "bdev_nvme_attach_controller" 00:23:09.551 },{ 00:23:09.551 "params": { 00:23:09.551 "name": "Nvme5", 00:23:09.551 "trtype": "tcp", 00:23:09.551 "traddr": "10.0.0.2", 00:23:09.551 "adrfam": "ipv4", 00:23:09.551 "trsvcid": "4420", 00:23:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:09.551 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:09.551 "hdgst": false, 00:23:09.551 "ddgst": false 00:23:09.551 }, 00:23:09.551 "method": "bdev_nvme_attach_controller" 00:23:09.551 },{ 00:23:09.551 "params": { 00:23:09.551 "name": "Nvme6", 00:23:09.551 "trtype": "tcp", 00:23:09.551 "traddr": "10.0.0.2", 00:23:09.551 "adrfam": "ipv4", 00:23:09.551 "trsvcid": "4420", 00:23:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:09.551 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:09.551 "hdgst": false, 00:23:09.551 "ddgst": false 00:23:09.551 }, 00:23:09.551 "method": "bdev_nvme_attach_controller" 00:23:09.551 },{ 00:23:09.551 "params": { 00:23:09.551 "name": "Nvme7", 00:23:09.551 "trtype": "tcp", 00:23:09.551 "traddr": "10.0.0.2", 00:23:09.551 "adrfam": "ipv4", 00:23:09.551 "trsvcid": "4420", 00:23:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:09.551 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:09.551 "hdgst": false, 00:23:09.551 "ddgst": false 00:23:09.551 }, 00:23:09.551 "method": "bdev_nvme_attach_controller" 00:23:09.551 },{ 00:23:09.551 "params": { 00:23:09.551 "name": "Nvme8", 00:23:09.551 "trtype": "tcp", 00:23:09.551 "traddr": "10.0.0.2", 00:23:09.551 "adrfam": "ipv4", 
00:23:09.551 "trsvcid": "4420", 00:23:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:09.551 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:09.551 "hdgst": false, 00:23:09.551 "ddgst": false 00:23:09.551 }, 00:23:09.551 "method": "bdev_nvme_attach_controller" 00:23:09.551 },{ 00:23:09.551 "params": { 00:23:09.551 "name": "Nvme9", 00:23:09.551 "trtype": "tcp", 00:23:09.551 "traddr": "10.0.0.2", 00:23:09.551 "adrfam": "ipv4", 00:23:09.551 "trsvcid": "4420", 00:23:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:09.551 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:09.551 "hdgst": false, 00:23:09.551 "ddgst": false 00:23:09.551 }, 00:23:09.551 "method": "bdev_nvme_attach_controller" 00:23:09.551 },{ 00:23:09.551 "params": { 00:23:09.551 "name": "Nvme10", 00:23:09.551 "trtype": "tcp", 00:23:09.551 "traddr": "10.0.0.2", 00:23:09.551 "adrfam": "ipv4", 00:23:09.551 "trsvcid": "4420", 00:23:09.551 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:09.551 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:09.551 "hdgst": false, 00:23:09.551 "ddgst": false 00:23:09.551 }, 00:23:09.551 "method": "bdev_nvme_attach_controller" 00:23:09.551 }' 00:23:09.551 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.551 [2024-07-15 13:08:31.251163] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.551 [2024-07-15 13:08:31.316071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.934 13:08:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.934 13:08:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:10.935 13:08:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:10.935 13:08:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.935 13:08:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.935 13:08:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.935 13:08:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 763161 00:23:10.935 13:08:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:10.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 763161 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:10.935 13:08:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 762840 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.877 { 00:23:11.877 "params": { 00:23:11.877 "name": "Nvme$subsystem", 00:23:11.877 "trtype": "$TEST_TRANSPORT", 00:23:11.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.877 "adrfam": "ipv4", 00:23:11.877 "trsvcid": "$NVMF_PORT", 00:23:11.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.877 "hdgst": ${hdgst:-false}, 00:23:11.877 "ddgst": ${ddgst:-false} 00:23:11.877 }, 00:23:11.877 "method": "bdev_nvme_attach_controller" 00:23:11.877 } 00:23:11.877 EOF 00:23:11.877 )") 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.877 { 00:23:11.877 "params": { 00:23:11.877 "name": "Nvme$subsystem", 00:23:11.877 "trtype": "$TEST_TRANSPORT", 00:23:11.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.877 "adrfam": "ipv4", 00:23:11.877 "trsvcid": "$NVMF_PORT", 00:23:11.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.877 "hdgst": ${hdgst:-false}, 00:23:11.877 "ddgst": ${ddgst:-false} 00:23:11.877 }, 00:23:11.877 "method": "bdev_nvme_attach_controller" 00:23:11.877 } 00:23:11.877 EOF 00:23:11.877 )") 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.877 { 00:23:11.877 "params": { 00:23:11.877 "name": "Nvme$subsystem", 00:23:11.877 "trtype": "$TEST_TRANSPORT", 00:23:11.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.877 "adrfam": "ipv4", 00:23:11.877 "trsvcid": "$NVMF_PORT", 00:23:11.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.877 "hdgst": ${hdgst:-false}, 00:23:11.877 "ddgst": ${ddgst:-false} 00:23:11.877 }, 00:23:11.877 "method": "bdev_nvme_attach_controller" 00:23:11.877 } 00:23:11.877 EOF 00:23:11.877 )") 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.877 { 00:23:11.877 "params": { 00:23:11.877 "name": "Nvme$subsystem", 00:23:11.877 "trtype": "$TEST_TRANSPORT", 00:23:11.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.877 "adrfam": "ipv4", 00:23:11.877 "trsvcid": "$NVMF_PORT", 00:23:11.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.877 "hdgst": ${hdgst:-false}, 00:23:11.877 "ddgst": ${ddgst:-false} 00:23:11.877 }, 00:23:11.877 "method": "bdev_nvme_attach_controller" 00:23:11.877 } 00:23:11.877 EOF 00:23:11.877 )") 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:23:11.877 { 00:23:11.877 "params": { 00:23:11.877 "name": "Nvme$subsystem", 00:23:11.877 "trtype": "$TEST_TRANSPORT", 00:23:11.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.877 "adrfam": "ipv4", 00:23:11.877 "trsvcid": "$NVMF_PORT", 00:23:11.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.877 "hdgst": ${hdgst:-false}, 00:23:11.877 "ddgst": ${ddgst:-false} 00:23:11.877 }, 00:23:11.877 "method": "bdev_nvme_attach_controller" 00:23:11.877 } 00:23:11.877 EOF 00:23:11.877 )") 00:23:11.877 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.878 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.878 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.878 { 00:23:11.878 "params": { 00:23:11.878 "name": "Nvme$subsystem", 00:23:11.878 "trtype": "$TEST_TRANSPORT", 00:23:11.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.878 "adrfam": "ipv4", 00:23:11.878 "trsvcid": "$NVMF_PORT", 00:23:11.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.878 "hdgst": ${hdgst:-false}, 00:23:11.878 "ddgst": ${ddgst:-false} 00:23:11.878 }, 00:23:11.878 "method": "bdev_nvme_attach_controller" 00:23:11.878 } 00:23:11.878 EOF 00:23:11.878 )") 00:23:11.878 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.878 [2024-07-15 13:08:33.680497] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:23:11.878 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.878 [2024-07-15 13:08:33.680551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763781 ] 00:23:11.878 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.878 { 00:23:11.878 "params": { 00:23:11.878 "name": "Nvme$subsystem", 00:23:11.878 "trtype": "$TEST_TRANSPORT", 00:23:11.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.878 "adrfam": "ipv4", 00:23:11.878 "trsvcid": "$NVMF_PORT", 00:23:11.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.878 "hdgst": ${hdgst:-false}, 00:23:11.878 "ddgst": ${ddgst:-false} 00:23:11.878 }, 00:23:11.878 "method": "bdev_nvme_attach_controller" 00:23:11.878 } 00:23:11.878 EOF 00:23:11.878 )") 00:23:11.878 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.878 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.878 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.878 { 00:23:11.878 "params": { 00:23:11.878 "name": "Nvme$subsystem", 00:23:11.878 "trtype": "$TEST_TRANSPORT", 00:23:11.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.878 "adrfam": "ipv4", 00:23:11.878 "trsvcid": "$NVMF_PORT", 00:23:11.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.878 "hdgst": ${hdgst:-false}, 00:23:11.878 "ddgst": ${ddgst:-false} 00:23:11.878 }, 00:23:11.878 
"method": "bdev_nvme_attach_controller" 00:23:11.878 } 00:23:11.878 EOF 00:23:11.878 )") 00:23:11.878 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:11.878 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:11.878 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:11.878 { 00:23:11.878 "params": { 00:23:11.878 "name": "Nvme$subsystem", 00:23:11.878 "trtype": "$TEST_TRANSPORT", 00:23:11.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.878 "adrfam": "ipv4", 00:23:11.878 "trsvcid": "$NVMF_PORT", 00:23:11.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.878 "hdgst": ${hdgst:-false}, 00:23:11.878 "ddgst": ${ddgst:-false} 00:23:11.878 }, 00:23:11.878 "method": "bdev_nvme_attach_controller" 00:23:11.878 } 00:23:11.878 EOF 00:23:11.878 )") 00:23:11.878 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:12.138 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:12.138 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:12.138 { 00:23:12.138 "params": { 00:23:12.138 "name": "Nvme$subsystem", 00:23:12.138 "trtype": "$TEST_TRANSPORT", 00:23:12.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "$NVMF_PORT", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.138 "hdgst": ${hdgst:-false}, 00:23:12.138 "ddgst": ${ddgst:-false} 00:23:12.138 }, 00:23:12.138 "method": "bdev_nvme_attach_controller" 00:23:12.138 } 00:23:12.138 EOF 00:23:12.138 )") 00:23:12.138 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:12.138 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.138 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:23:12.138 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:12.138 13:08:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:12.138 "params": { 00:23:12.138 "name": "Nvme1", 00:23:12.138 "trtype": "tcp", 00:23:12.138 "traddr": "10.0.0.2", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "4420", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.138 "hdgst": false, 00:23:12.138 "ddgst": false 00:23:12.138 }, 00:23:12.138 "method": "bdev_nvme_attach_controller" 00:23:12.138 },{ 00:23:12.138 "params": { 00:23:12.138 "name": "Nvme2", 00:23:12.138 "trtype": "tcp", 00:23:12.138 "traddr": "10.0.0.2", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "4420", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:12.138 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:12.138 "hdgst": false, 00:23:12.138 "ddgst": false 00:23:12.138 }, 00:23:12.138 "method": "bdev_nvme_attach_controller" 00:23:12.138 },{ 00:23:12.138 "params": { 00:23:12.138 "name": "Nvme3", 00:23:12.138 "trtype": "tcp", 00:23:12.138 "traddr": "10.0.0.2", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "4420", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:12.138 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:12.138 "hdgst": false, 00:23:12.138 "ddgst": false 00:23:12.138 }, 00:23:12.138 "method": "bdev_nvme_attach_controller" 00:23:12.138 },{ 00:23:12.138 "params": { 00:23:12.138 "name": "Nvme4", 00:23:12.138 "trtype": "tcp", 00:23:12.138 "traddr": "10.0.0.2", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "4420", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:12.138 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:12.138 "hdgst": false, 00:23:12.138 "ddgst": false 00:23:12.138 }, 00:23:12.138 "method": "bdev_nvme_attach_controller" 00:23:12.138 },{ 00:23:12.138 "params": { 00:23:12.138 "name": "Nvme5", 00:23:12.138 "trtype": "tcp", 00:23:12.138 "traddr": "10.0.0.2", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "4420", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:12.138 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:12.138 "hdgst": false, 00:23:12.138 "ddgst": false 00:23:12.138 }, 00:23:12.138 "method": "bdev_nvme_attach_controller" 00:23:12.138 },{ 00:23:12.138 "params": { 00:23:12.138 "name": "Nvme6", 00:23:12.138 "trtype": "tcp", 00:23:12.138 "traddr": "10.0.0.2", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "4420", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:12.138 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:12.138 "hdgst": false, 00:23:12.138 "ddgst": false 00:23:12.138 }, 00:23:12.138 "method": "bdev_nvme_attach_controller" 00:23:12.138 },{ 00:23:12.138 "params": { 00:23:12.138 "name": "Nvme7", 00:23:12.138 "trtype": "tcp", 00:23:12.138 "traddr": "10.0.0.2", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "4420", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:12.138 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:12.138 "hdgst": false, 00:23:12.138 "ddgst": false 00:23:12.138 }, 00:23:12.138 "method": "bdev_nvme_attach_controller" 00:23:12.138 },{ 00:23:12.138 "params": { 00:23:12.138 "name": "Nvme8", 00:23:12.138 "trtype": "tcp", 00:23:12.138 "traddr": "10.0.0.2", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "4420", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:12.138 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:12.138 "hdgst": false, 
00:23:12.138 "ddgst": false 00:23:12.138 }, 00:23:12.138 "method": "bdev_nvme_attach_controller" 00:23:12.138 },{ 00:23:12.138 "params": { 00:23:12.138 "name": "Nvme9", 00:23:12.138 "trtype": "tcp", 00:23:12.138 "traddr": "10.0.0.2", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "4420", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:12.138 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:12.138 "hdgst": false, 00:23:12.138 "ddgst": false 00:23:12.138 }, 00:23:12.138 "method": "bdev_nvme_attach_controller" 00:23:12.138 },{ 00:23:12.138 "params": { 00:23:12.138 "name": "Nvme10", 00:23:12.138 "trtype": "tcp", 00:23:12.138 "traddr": "10.0.0.2", 00:23:12.138 "adrfam": "ipv4", 00:23:12.138 "trsvcid": "4420", 00:23:12.138 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:12.138 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:12.138 "hdgst": false, 00:23:12.138 "ddgst": false 00:23:12.138 }, 00:23:12.138 "method": "bdev_nvme_attach_controller" 00:23:12.138 }' 00:23:12.139 [2024-07-15 13:08:33.748259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.139 [2024-07-15 13:08:33.812468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.592 Running I/O for 1 seconds... 00:23:14.528 00:23:14.528 Latency(us) 00:23:14.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.528 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.528 Verification LBA range: start 0x0 length 0x400 00:23:14.528 Nvme1n1 : 1.07 239.03 14.94 0.00 0.00 264270.72 20753.07 244667.73 00:23:14.528 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.528 Verification LBA range: start 0x0 length 0x400 00:23:14.528 Nvme2n1 : 1.14 224.24 14.02 0.00 0.00 277513.39 17585.49 293601.28 00:23:14.528 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.528 Verification LBA range: start 0x0 length 0x400 00:23:14.528 Nvme3n1 : 1.08 236.03 14.75 0.00 0.00 258406.08 13434.88 248162.99 00:23:14.528 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.528 Verification LBA range: start 0x0 length 0x400 00:23:14.528 Nvme4n1 : 1.09 240.16 15.01 0.00 0.00 247071.66 8028.16 244667.73 00:23:14.528 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.528 Verification LBA range: start 0x0 length 0x400 00:23:14.528 Nvme5n1 : 1.15 278.43 17.40 0.00 0.00 212017.75 13762.56 246415.36 00:23:14.528 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.528 Verification LBA range: start 0x0 length 0x400 00:23:14.528 Nvme6n1 : 1.15 221.95 13.87 0.00 0.00 261162.67 16930.13 256901.12 00:23:14.528 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.528 Verification LBA range: start 0x0 length 0x400 00:23:14.528 Nvme7n1 : 1.14 224.82 14.05 0.00 0.00 252599.04 32986.45 212336.64 00:23:14.528 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.528 Verification LBA range: start 0x0 length 0x400 00:23:14.528 Nvme8n1 : 1.18 270.70 16.92 0.00 0.00 205929.30 14199.47 228939.09 00:23:14.528 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.528 Verification LBA range: start 0x0 length 0x400 00:23:14.528 Nvme9n1 : 1.18 271.37 16.96 0.00 0.00 198685.87 17803.95 267386.88 00:23:14.528 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:14.528 Verification LBA range: start 0x0 length 0x400 
00:23:14.528 Nvme10n1 : 1.21 264.25 16.52 0.00 0.00 205181.18 12888.75 269134.51 00:23:14.528 =================================================================================================================== 00:23:14.528 Total : 2470.98 154.44 0.00 0.00 235320.06 8028.16 293601.28 00:23:14.786 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:14.786 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:14.786 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:14.786 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:14.787 rmmod nvme_tcp 00:23:14.787 rmmod nvme_fabrics 00:23:14.787 rmmod nvme_keyring 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 762840 ']' 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 762840 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 762840 ']' 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 762840 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 762840 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 762840' 00:23:14.787 killing process with pid 762840 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 762840 00:23:14.787 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 762840 00:23:15.046 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:15.046 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:15.046 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:15.046 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:15.046 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:15.046 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.046 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.046 13:08:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:17.584 00:23:17.584 real 0m17.252s 00:23:17.584 user 0m33.400s 00:23:17.584 sys 0m7.229s 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:17.584 ************************************ 00:23:17.584 END TEST nvmf_shutdown_tc1 00:23:17.584 ************************************ 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:17.584 ************************************ 00:23:17.584 START TEST nvmf_shutdown_tc2 00:23:17.584 ************************************ 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.584 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma 
]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:17.585 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:17.585 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:17.585 Found net devices under 0000:31:00.0: cvl_0_0 00:23:17.585 13:08:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:17.585 Found net devices under 0000:31:00.1: cvl_0_1 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.585 13:08:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:23:17.585 00:23:17.585 --- 10.0.0.2 ping statistics --- 00:23:17.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.585 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:23:17.585 00:23:17.585 --- 10.0.0.1 ping statistics --- 00:23:17.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.585 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=764961 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 764961 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 764961 ']' 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:17.585 13:08:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:17.585 [2024-07-15 13:08:39.309595] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:23:17.585 [2024-07-15 13:08:39.309644] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.585 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.585 [2024-07-15 13:08:39.398294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.846 [2024-07-15 13:08:39.453066] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.847 [2024-07-15 13:08:39.453099] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.847 [2024-07-15 13:08:39.453104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.847 [2024-07-15 13:08:39.453109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.847 [2024-07-15 13:08:39.453113] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
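The shutdown_tc2 prologue above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A rough bash sketch of that start-and-poll step follows; the binary path, namespace name and the -i/-e/-m flags are taken from the log, while the polling loop itself is an assumption -- the real waitforlisten helper in autotest_common.sh may differ in detail.

#!/usr/bin/env bash
# Sketch only: start the NVMe-oF target in its namespace and wait for its RPC socket.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk               # namespace created by the netns setup traced above
RPC_SOCK=/var/tmp/spdk.sock

# Flags mirror the log: -i 0 (instance id), -e 0xFFFF (tracepoint group mask), -m 0x1E (core mask).
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the target responds (assumed loop, ~100 retries).
for ((i = 0; i < 100; i++)); do
    if "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" -t 1 rpc_get_methods &> /dev/null; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
        exit 0
    fi
    sleep 0.5
done

echo "nvmf_tgt did not start listening on $RPC_SOCK" >&2
kill -9 "$nvmfpid" 2> /dev/null || true
exit 1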
00:23:17.847 [2024-07-15 13:08:39.453212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.847 [2024-07-15 13:08:39.453370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.847 [2024-07-15 13:08:39.453494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.847 [2024-07-15 13:08:39.453496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.417 [2024-07-15 13:08:40.114628] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.417 13:08:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.417 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.417 Malloc1 00:23:18.417 [2024-07-15 13:08:40.213299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.417 Malloc2 00:23:18.678 Malloc3 00:23:18.678 Malloc4 00:23:18.678 Malloc5 00:23:18.678 Malloc6 00:23:18.678 Malloc7 00:23:18.678 Malloc8 00:23:18.940 Malloc9 00:23:18.940 Malloc10 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=765318 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 765318 /var/tmp/bdevperf.sock 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 765318 ']' 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
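The Malloc1 through Malloc10 bdevs and the 10.0.0.2:4420 TCP listener reported above are created from the rpcs.txt batch that the shutdown.sh "cat" steps assemble for each of the ten subsystems. The batch itself is never echoed into this log, so the commands below are only an illustrative sketch built from standard SPDK rpc.py methods, with the bdev size and serial number made up for the example.

# Illustrative sketch; the real batch lives in test/nvmf/target/rpcs.txt and is
# replayed through rpc_cmd, not typed by hand.
scripts/rpc.py bdev_malloc_create -b Malloc1 128 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420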
00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.940 { 00:23:18.940 "params": { 00:23:18.940 "name": "Nvme$subsystem", 00:23:18.940 "trtype": "$TEST_TRANSPORT", 00:23:18.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.940 "adrfam": "ipv4", 00:23:18.940 "trsvcid": "$NVMF_PORT", 00:23:18.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.940 "hdgst": ${hdgst:-false}, 00:23:18.940 "ddgst": ${ddgst:-false} 00:23:18.940 }, 00:23:18.940 "method": "bdev_nvme_attach_controller" 00:23:18.940 } 00:23:18.940 EOF 00:23:18.940 )") 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.940 { 00:23:18.940 "params": { 00:23:18.940 "name": "Nvme$subsystem", 00:23:18.940 "trtype": "$TEST_TRANSPORT", 00:23:18.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.940 "adrfam": "ipv4", 00:23:18.940 "trsvcid": "$NVMF_PORT", 00:23:18.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.940 "hdgst": ${hdgst:-false}, 00:23:18.940 "ddgst": ${ddgst:-false} 00:23:18.940 }, 00:23:18.940 "method": "bdev_nvme_attach_controller" 00:23:18.940 } 00:23:18.940 EOF 00:23:18.940 )") 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.940 { 00:23:18.940 "params": { 00:23:18.940 "name": "Nvme$subsystem", 00:23:18.940 "trtype": "$TEST_TRANSPORT", 00:23:18.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.940 "adrfam": "ipv4", 00:23:18.940 "trsvcid": "$NVMF_PORT", 00:23:18.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.940 "hdgst": ${hdgst:-false}, 00:23:18.940 "ddgst": ${ddgst:-false} 00:23:18.940 }, 00:23:18.940 "method": "bdev_nvme_attach_controller" 00:23:18.940 } 00:23:18.940 EOF 00:23:18.940 )") 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:18.940 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.940 { 00:23:18.940 "params": { 00:23:18.940 "name": "Nvme$subsystem", 00:23:18.940 "trtype": "$TEST_TRANSPORT", 00:23:18.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.940 "adrfam": "ipv4", 00:23:18.940 "trsvcid": "$NVMF_PORT", 00:23:18.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.941 "hdgst": ${hdgst:-false}, 00:23:18.941 "ddgst": ${ddgst:-false} 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 } 00:23:18.941 EOF 00:23:18.941 )") 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.941 { 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme$subsystem", 00:23:18.941 "trtype": "$TEST_TRANSPORT", 00:23:18.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "$NVMF_PORT", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.941 "hdgst": ${hdgst:-false}, 00:23:18.941 "ddgst": ${ddgst:-false} 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 } 00:23:18.941 EOF 00:23:18.941 )") 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.941 { 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme$subsystem", 00:23:18.941 "trtype": "$TEST_TRANSPORT", 00:23:18.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "$NVMF_PORT", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.941 "hdgst": ${hdgst:-false}, 00:23:18.941 "ddgst": ${ddgst:-false} 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 } 00:23:18.941 EOF 00:23:18.941 )") 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.941 [2024-07-15 13:08:40.663197] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
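The heredoc stanzas accumulated above (and continued below) become the per-controller entries of the JSON document that gen_nvmf_target_json feeds to bdevperf through --json /dev/fd/63. The outer wrapper is not printed in this log; the sketch below assumes SPDK's usual subsystem-config layout around one of the "params" blocks that do appear verbatim further down.

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}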
00:23:18.941 [2024-07-15 13:08:40.663256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid765318 ] 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.941 { 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme$subsystem", 00:23:18.941 "trtype": "$TEST_TRANSPORT", 00:23:18.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "$NVMF_PORT", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.941 "hdgst": ${hdgst:-false}, 00:23:18.941 "ddgst": ${ddgst:-false} 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 } 00:23:18.941 EOF 00:23:18.941 )") 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.941 { 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme$subsystem", 00:23:18.941 "trtype": "$TEST_TRANSPORT", 00:23:18.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "$NVMF_PORT", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.941 "hdgst": ${hdgst:-false}, 00:23:18.941 "ddgst": ${ddgst:-false} 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 } 00:23:18.941 EOF 00:23:18.941 )") 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.941 { 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme$subsystem", 00:23:18.941 "trtype": "$TEST_TRANSPORT", 00:23:18.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "$NVMF_PORT", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.941 "hdgst": ${hdgst:-false}, 00:23:18.941 "ddgst": ${ddgst:-false} 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 } 00:23:18.941 EOF 00:23:18.941 )") 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:18.941 { 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme$subsystem", 00:23:18.941 "trtype": "$TEST_TRANSPORT", 00:23:18.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "$NVMF_PORT", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.941 "hdgst": 
${hdgst:-false}, 00:23:18.941 "ddgst": ${ddgst:-false} 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 } 00:23:18.941 EOF 00:23:18.941 )") 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:18.941 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:18.941 13:08:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme1", 00:23:18.941 "trtype": "tcp", 00:23:18.941 "traddr": "10.0.0.2", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "4420", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.941 "hdgst": false, 00:23:18.941 "ddgst": false 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 },{ 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme2", 00:23:18.941 "trtype": "tcp", 00:23:18.941 "traddr": "10.0.0.2", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "4420", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:18.941 "hdgst": false, 00:23:18.941 "ddgst": false 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 },{ 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme3", 00:23:18.941 "trtype": "tcp", 00:23:18.941 "traddr": "10.0.0.2", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "4420", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:18.941 "hdgst": false, 00:23:18.941 "ddgst": false 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 },{ 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme4", 00:23:18.941 "trtype": "tcp", 00:23:18.941 "traddr": "10.0.0.2", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "4420", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:18.941 "hdgst": false, 00:23:18.941 "ddgst": false 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 },{ 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme5", 00:23:18.941 "trtype": "tcp", 00:23:18.941 "traddr": "10.0.0.2", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "4420", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:18.941 "hdgst": false, 00:23:18.941 "ddgst": false 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 },{ 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme6", 00:23:18.941 "trtype": "tcp", 00:23:18.941 "traddr": "10.0.0.2", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "4420", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:18.941 "hdgst": false, 00:23:18.941 "ddgst": false 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 },{ 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme7", 00:23:18.941 "trtype": "tcp", 00:23:18.941 "traddr": "10.0.0.2", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "4420", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:18.941 "hdgst": false, 00:23:18.941 
"ddgst": false 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 },{ 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme8", 00:23:18.941 "trtype": "tcp", 00:23:18.941 "traddr": "10.0.0.2", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "4420", 00:23:18.941 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:18.941 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:18.941 "hdgst": false, 00:23:18.941 "ddgst": false 00:23:18.941 }, 00:23:18.941 "method": "bdev_nvme_attach_controller" 00:23:18.941 },{ 00:23:18.941 "params": { 00:23:18.941 "name": "Nvme9", 00:23:18.941 "trtype": "tcp", 00:23:18.941 "traddr": "10.0.0.2", 00:23:18.941 "adrfam": "ipv4", 00:23:18.941 "trsvcid": "4420", 00:23:18.942 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:18.942 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:18.942 "hdgst": false, 00:23:18.942 "ddgst": false 00:23:18.942 }, 00:23:18.942 "method": "bdev_nvme_attach_controller" 00:23:18.942 },{ 00:23:18.942 "params": { 00:23:18.942 "name": "Nvme10", 00:23:18.942 "trtype": "tcp", 00:23:18.942 "traddr": "10.0.0.2", 00:23:18.942 "adrfam": "ipv4", 00:23:18.942 "trsvcid": "4420", 00:23:18.942 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:18.942 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:18.942 "hdgst": false, 00:23:18.942 "ddgst": false 00:23:18.942 }, 00:23:18.942 "method": "bdev_nvme_attach_controller" 00:23:18.942 }' 00:23:18.942 [2024-07-15 13:08:40.730107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.203 [2024-07-15 13:08:40.795455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.585 Running I/O for 10 seconds... 00:23:20.585 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.585 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:23:20.586 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.846 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.846 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:20.846 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:20.846 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:21.108 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:21.108 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:21.108 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.108 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.108 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.108 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.108 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.108 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:21.108 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:21.108 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:21.370 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:21.370 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:21.370 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.370 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.370 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.370 13:08:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 765318 00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 765318 ']' 00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 765318 00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:21.370 13:08:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 765318
00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 765318'
00:23:21.370 killing process with pid 765318
00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 765318
00:23:21.370 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 765318
00:23:21.370 Received shutdown signal, test time was about 0.955000 seconds
00:23:21.370
00:23:21.370 Latency(us)
00:23:21.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:21.370 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.370 Verification LBA range: start 0x0 length 0x400
00:23:21.370 Nvme1n1 : 0.93 207.29 12.96 0.00 0.00 304832.00 19114.67 251658.24
00:23:21.370 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.370 Verification LBA range: start 0x0 length 0x400
00:23:21.370 Nvme2n1 : 0.95 270.85 16.93 0.00 0.00 228518.19 19551.57 244667.73
00:23:21.370 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.370 Verification LBA range: start 0x0 length 0x400
00:23:21.370 Nvme3n1 : 0.95 268.32 16.77 0.00 0.00 225306.67 8956.59 251658.24
00:23:21.370 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.370 Verification LBA range: start 0x0 length 0x400
00:23:21.370 Nvme4n1 : 0.95 274.76 17.17 0.00 0.00 214945.69 4669.44 230686.72
00:23:21.370 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.370 Verification LBA range: start 0x0 length 0x400
00:23:21.370 Nvme5n1 : 0.93 205.53 12.85 0.00 0.00 281739.95 35389.44 256901.12
00:23:21.370 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.370 Verification LBA range: start 0x0 length 0x400
00:23:21.370 Nvme6n1 : 0.92 215.22 13.45 0.00 0.00 260806.46 3153.92 244667.73
00:23:21.370 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.370 Verification LBA range: start 0x0 length 0x400
00:23:21.370 Nvme7n1 : 0.93 206.52 12.91 0.00 0.00 265840.36 14636.37 234181.97
00:23:21.370 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.370 Verification LBA range: start 0x0 length 0x400
00:23:21.370 Nvme8n1 : 0.94 272.36 17.02 0.00 0.00 198211.63 16493.23 253405.87
00:23:21.370 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.370 Verification LBA range: start 0x0 length 0x400
00:23:21.370 Nvme9n1 : 0.95 269.51 16.84 0.00 0.00 195764.69 17148.59 249910.61
00:23:21.370 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:21.370 Verification LBA range: start 0x0 length 0x400
00:23:21.370 Nvme10n1 : 0.94 204.91 12.81 0.00 0.00 250447.36 19005.44 276125.01
00:23:21.370 ===================================================================================================================
00:23:21.370 Total : 2395.26 149.70 0.00 0.00 238360.76 3153.92
276125.01 00:23:21.630 13:08:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 764961 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:22.571 rmmod nvme_tcp 00:23:22.571 rmmod nvme_fabrics 00:23:22.571 rmmod nvme_keyring 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 764961 ']' 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 764961 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 764961 ']' 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 764961 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.571 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 764961 00:23:22.832 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:22.832 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:22.832 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 764961' 00:23:22.832 killing process with pid 764961 00:23:22.832 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 764961 00:23:22.832 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 764961 00:23:23.093 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:23.093 13:08:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:23.093 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:23.093 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:23.093 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:23.093 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.093 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.093 13:08:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.007 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:25.007 00:23:25.007 real 0m7.768s 00:23:25.007 user 0m23.563s 00:23:25.007 sys 0m1.170s 00:23:25.007 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:25.007 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:25.007 ************************************ 00:23:25.007 END TEST nvmf_shutdown_tc2 00:23:25.007 ************************************ 00:23:25.007 13:08:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:25.007 13:08:46 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:25.007 13:08:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:25.007 13:08:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:25.007 13:08:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:25.007 ************************************ 00:23:25.007 START TEST nvmf_shutdown_tc3 00:23:25.007 ************************************ 00:23:25.007 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:25.007 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:25.007 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:25.007 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:25.008 13:08:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:25.008 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:25.008 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:25.008 Found net devices under 0000:31:00.0: cvl_0_0 00:23:25.008 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:25.268 Found net devices under 0000:31:00.1: cvl_0_1 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:25.268 13:08:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.268 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.269 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.269 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:25.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:23:25.530 00:23:25.530 --- 10.0.0.2 ping statistics --- 00:23:25.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.530 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:23:25.530 00:23:25.530 --- 10.0.0.1 ping statistics --- 00:23:25.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.530 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=766580 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 766580 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 766580 ']' 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.530 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:25.530 [2024-07-15 13:08:47.191068] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:23:25.531 [2024-07-15 13:08:47.191110] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.531 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.531 [2024-07-15 13:08:47.269924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.531 [2024-07-15 13:08:47.324592] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.531 [2024-07-15 13:08:47.324625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.531 [2024-07-15 13:08:47.324630] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.531 [2024-07-15 13:08:47.324635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.531 [2024-07-15 13:08:47.324639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
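For reference, the namespace layout that nvmf_tcp_init assembled a few entries earlier (cvl_0_0 moved into cvl_0_0_ns_spdk as 10.0.0.2/24, cvl_0_1 left in the root namespace as 10.0.0.1/24, and an iptables accept rule added for TCP port 4420 on cvl_0_1) condenses to the iproute2/iptables calls below, taken directly from the logged commands; the device names are specific to this host.

# Condensed from the nvmf_tcp_init entries in this log.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT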
00:23:25.531 [2024-07-15 13:08:47.324757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.531 [2024-07-15 13:08:47.324912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.531 [2024-07-15 13:08:47.325064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.531 [2024-07-15 13:08:47.325067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:26.473 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.473 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:26.473 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:26.473 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.473 13:08:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.473 [2024-07-15 13:08:48.006652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.473 13:08:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.473 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.473 Malloc1 00:23:26.473 [2024-07-15 13:08:48.101528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.473 Malloc2 00:23:26.473 Malloc3 00:23:26.473 Malloc4 00:23:26.473 Malloc5 00:23:26.473 Malloc6 00:23:26.735 Malloc7 00:23:26.735 Malloc8 00:23:26.735 Malloc9 00:23:26.735 Malloc10 00:23:26.735 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.735 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:26.735 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.735 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.735 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=766862 00:23:26.735 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 766862 /var/tmp/bdevperf.sock 00:23:26.735 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 766862 ']' 00:23:26.735 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
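With the tc3 target now listening on 10.0.0.2:4420 (notice above), the listener can also be checked out of band from the root namespace with nvme-cli, assuming the nvme binary is installed on the host; this one-liner is illustrative only and is not part of shutdown.sh.

# Illustrative check, not executed by the test.
nvme discover -t tcp -a 10.0.0.2 -s 4420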
00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.736 { 00:23:26.736 "params": { 00:23:26.736 "name": "Nvme$subsystem", 00:23:26.736 "trtype": "$TEST_TRANSPORT", 00:23:26.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.736 "adrfam": "ipv4", 00:23:26.736 "trsvcid": "$NVMF_PORT", 00:23:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.736 "hdgst": ${hdgst:-false}, 00:23:26.736 "ddgst": ${ddgst:-false} 00:23:26.736 }, 00:23:26.736 "method": "bdev_nvme_attach_controller" 00:23:26.736 } 00:23:26.736 EOF 00:23:26.736 )") 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.736 { 00:23:26.736 "params": { 00:23:26.736 "name": "Nvme$subsystem", 00:23:26.736 "trtype": "$TEST_TRANSPORT", 00:23:26.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.736 "adrfam": "ipv4", 00:23:26.736 "trsvcid": "$NVMF_PORT", 00:23:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.736 "hdgst": ${hdgst:-false}, 00:23:26.736 "ddgst": ${ddgst:-false} 00:23:26.736 }, 00:23:26.736 "method": "bdev_nvme_attach_controller" 00:23:26.736 } 00:23:26.736 EOF 00:23:26.736 )") 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.736 { 00:23:26.736 "params": { 00:23:26.736 "name": "Nvme$subsystem", 00:23:26.736 "trtype": "$TEST_TRANSPORT", 00:23:26.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.736 "adrfam": "ipv4", 00:23:26.736 "trsvcid": "$NVMF_PORT", 00:23:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.736 "hdgst": ${hdgst:-false}, 00:23:26.736 "ddgst": ${ddgst:-false} 00:23:26.736 }, 00:23:26.736 "method": "bdev_nvme_attach_controller" 00:23:26.736 } 00:23:26.736 EOF 00:23:26.736 )") 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.736 { 00:23:26.736 "params": { 00:23:26.736 "name": "Nvme$subsystem", 00:23:26.736 "trtype": "$TEST_TRANSPORT", 00:23:26.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.736 "adrfam": "ipv4", 00:23:26.736 "trsvcid": "$NVMF_PORT", 00:23:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.736 "hdgst": ${hdgst:-false}, 00:23:26.736 "ddgst": ${ddgst:-false} 00:23:26.736 }, 00:23:26.736 "method": "bdev_nvme_attach_controller" 00:23:26.736 } 00:23:26.736 EOF 00:23:26.736 )") 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.736 { 00:23:26.736 "params": { 00:23:26.736 "name": "Nvme$subsystem", 00:23:26.736 "trtype": "$TEST_TRANSPORT", 00:23:26.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.736 "adrfam": "ipv4", 00:23:26.736 "trsvcid": "$NVMF_PORT", 00:23:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.736 "hdgst": ${hdgst:-false}, 00:23:26.736 "ddgst": ${ddgst:-false} 00:23:26.736 }, 00:23:26.736 "method": "bdev_nvme_attach_controller" 00:23:26.736 } 00:23:26.736 EOF 00:23:26.736 )") 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.736 { 00:23:26.736 "params": { 00:23:26.736 "name": "Nvme$subsystem", 00:23:26.736 "trtype": "$TEST_TRANSPORT", 00:23:26.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.736 "adrfam": "ipv4", 00:23:26.736 "trsvcid": "$NVMF_PORT", 00:23:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.736 "hdgst": ${hdgst:-false}, 00:23:26.736 "ddgst": ${ddgst:-false} 00:23:26.736 }, 00:23:26.736 "method": "bdev_nvme_attach_controller" 00:23:26.736 } 00:23:26.736 EOF 00:23:26.736 )") 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.736 [2024-07-15 13:08:48.542862] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:23:26.736 [2024-07-15 13:08:48.542914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766862 ] 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.736 { 00:23:26.736 "params": { 00:23:26.736 "name": "Nvme$subsystem", 00:23:26.736 "trtype": "$TEST_TRANSPORT", 00:23:26.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.736 "adrfam": "ipv4", 00:23:26.736 "trsvcid": "$NVMF_PORT", 00:23:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.736 "hdgst": ${hdgst:-false}, 00:23:26.736 "ddgst": ${ddgst:-false} 00:23:26.736 }, 00:23:26.736 "method": "bdev_nvme_attach_controller" 00:23:26.736 } 00:23:26.736 EOF 00:23:26.736 )") 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.736 { 00:23:26.736 "params": { 00:23:26.736 "name": "Nvme$subsystem", 00:23:26.736 "trtype": "$TEST_TRANSPORT", 00:23:26.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.736 "adrfam": "ipv4", 00:23:26.736 "trsvcid": "$NVMF_PORT", 00:23:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.736 "hdgst": ${hdgst:-false}, 00:23:26.736 "ddgst": ${ddgst:-false} 00:23:26.736 }, 00:23:26.736 "method": "bdev_nvme_attach_controller" 00:23:26.736 } 00:23:26.736 EOF 00:23:26.736 )") 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.736 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.736 { 00:23:26.736 "params": { 00:23:26.736 "name": "Nvme$subsystem", 00:23:26.736 "trtype": "$TEST_TRANSPORT", 00:23:26.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.736 "adrfam": "ipv4", 00:23:26.736 "trsvcid": "$NVMF_PORT", 00:23:26.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.736 "hdgst": ${hdgst:-false}, 00:23:26.736 "ddgst": ${ddgst:-false} 00:23:26.736 }, 00:23:26.736 "method": "bdev_nvme_attach_controller" 00:23:26.736 } 00:23:26.736 EOF 00:23:26.736 )") 00:23:26.997 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.997 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:26.997 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:26.997 { 00:23:26.997 "params": { 00:23:26.997 "name": "Nvme$subsystem", 00:23:26.997 "trtype": "$TEST_TRANSPORT", 00:23:26.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.997 "adrfam": "ipv4", 00:23:26.997 "trsvcid": "$NVMF_PORT", 00:23:26.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.997 "hdgst": 
${hdgst:-false}, 00:23:26.997 "ddgst": ${ddgst:-false} 00:23:26.997 }, 00:23:26.997 "method": "bdev_nvme_attach_controller" 00:23:26.997 } 00:23:26.997 EOF 00:23:26.997 )") 00:23:26.997 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:26.997 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.997 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:26.997 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:26.997 13:08:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:26.997 "params": { 00:23:26.997 "name": "Nvme1", 00:23:26.997 "trtype": "tcp", 00:23:26.997 "traddr": "10.0.0.2", 00:23:26.997 "adrfam": "ipv4", 00:23:26.997 "trsvcid": "4420", 00:23:26.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.997 "hdgst": false, 00:23:26.997 "ddgst": false 00:23:26.997 }, 00:23:26.997 "method": "bdev_nvme_attach_controller" 00:23:26.997 },{ 00:23:26.997 "params": { 00:23:26.997 "name": "Nvme2", 00:23:26.997 "trtype": "tcp", 00:23:26.997 "traddr": "10.0.0.2", 00:23:26.997 "adrfam": "ipv4", 00:23:26.997 "trsvcid": "4420", 00:23:26.997 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.997 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:26.997 "hdgst": false, 00:23:26.997 "ddgst": false 00:23:26.997 }, 00:23:26.997 "method": "bdev_nvme_attach_controller" 00:23:26.997 },{ 00:23:26.997 "params": { 00:23:26.997 "name": "Nvme3", 00:23:26.997 "trtype": "tcp", 00:23:26.997 "traddr": "10.0.0.2", 00:23:26.997 "adrfam": "ipv4", 00:23:26.997 "trsvcid": "4420", 00:23:26.997 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:26.997 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:26.997 "hdgst": false, 00:23:26.997 "ddgst": false 00:23:26.997 }, 00:23:26.997 "method": "bdev_nvme_attach_controller" 00:23:26.997 },{ 00:23:26.997 "params": { 00:23:26.997 "name": "Nvme4", 00:23:26.997 "trtype": "tcp", 00:23:26.997 "traddr": "10.0.0.2", 00:23:26.997 "adrfam": "ipv4", 00:23:26.997 "trsvcid": "4420", 00:23:26.997 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:26.997 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:26.997 "hdgst": false, 00:23:26.997 "ddgst": false 00:23:26.997 }, 00:23:26.997 "method": "bdev_nvme_attach_controller" 00:23:26.997 },{ 00:23:26.997 "params": { 00:23:26.997 "name": "Nvme5", 00:23:26.997 "trtype": "tcp", 00:23:26.997 "traddr": "10.0.0.2", 00:23:26.997 "adrfam": "ipv4", 00:23:26.997 "trsvcid": "4420", 00:23:26.997 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:26.997 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:26.997 "hdgst": false, 00:23:26.997 "ddgst": false 00:23:26.997 }, 00:23:26.997 "method": "bdev_nvme_attach_controller" 00:23:26.997 },{ 00:23:26.997 "params": { 00:23:26.997 "name": "Nvme6", 00:23:26.997 "trtype": "tcp", 00:23:26.997 "traddr": "10.0.0.2", 00:23:26.997 "adrfam": "ipv4", 00:23:26.997 "trsvcid": "4420", 00:23:26.997 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:26.997 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:26.997 "hdgst": false, 00:23:26.997 "ddgst": false 00:23:26.997 }, 00:23:26.997 "method": "bdev_nvme_attach_controller" 00:23:26.997 },{ 00:23:26.997 "params": { 00:23:26.997 "name": "Nvme7", 00:23:26.997 "trtype": "tcp", 00:23:26.997 "traddr": "10.0.0.2", 00:23:26.997 "adrfam": "ipv4", 00:23:26.997 "trsvcid": "4420", 00:23:26.997 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:26.997 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:26.997 "hdgst": false, 00:23:26.997 
"ddgst": false 00:23:26.997 }, 00:23:26.997 "method": "bdev_nvme_attach_controller" 00:23:26.997 },{ 00:23:26.997 "params": { 00:23:26.997 "name": "Nvme8", 00:23:26.997 "trtype": "tcp", 00:23:26.997 "traddr": "10.0.0.2", 00:23:26.997 "adrfam": "ipv4", 00:23:26.997 "trsvcid": "4420", 00:23:26.997 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:26.997 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:26.997 "hdgst": false, 00:23:26.997 "ddgst": false 00:23:26.997 }, 00:23:26.997 "method": "bdev_nvme_attach_controller" 00:23:26.997 },{ 00:23:26.997 "params": { 00:23:26.997 "name": "Nvme9", 00:23:26.997 "trtype": "tcp", 00:23:26.997 "traddr": "10.0.0.2", 00:23:26.997 "adrfam": "ipv4", 00:23:26.997 "trsvcid": "4420", 00:23:26.997 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:26.997 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:26.997 "hdgst": false, 00:23:26.997 "ddgst": false 00:23:26.997 }, 00:23:26.997 "method": "bdev_nvme_attach_controller" 00:23:26.997 },{ 00:23:26.997 "params": { 00:23:26.998 "name": "Nvme10", 00:23:26.998 "trtype": "tcp", 00:23:26.998 "traddr": "10.0.0.2", 00:23:26.998 "adrfam": "ipv4", 00:23:26.998 "trsvcid": "4420", 00:23:26.998 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:26.998 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:26.998 "hdgst": false, 00:23:26.998 "ddgst": false 00:23:26.998 }, 00:23:26.998 "method": "bdev_nvme_attach_controller" 00:23:26.998 }' 00:23:26.998 [2024-07-15 13:08:48.609713] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.998 [2024-07-15 13:08:48.675601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.910 Running I/O for 10 seconds... 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:29.495 13:08:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 766580 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 766580 ']' 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 766580 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 766580 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 766580' 00:23:29.495 killing process with pid 766580 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 766580 00:23:29.495 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 766580 00:23:29.495 [2024-07-15 13:08:51.162802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26206f0 is same with the state(5) to be set 00:23:29.495 [2024-07-15 13:08:51.162848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26206f0 is same with the state(5) to be set 00:23:29.495 [2024-07-15 13:08:51.162854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26206f0 is same with the state(5) to be set 00:23:29.495 [2024-07-15 13:08:51.162859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26206f0 is same with the state(5) to be set 00:23:29.495 [2024-07-15 13:08:51.162864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26206f0 is same with the state(5) to be set 00:23:29.495 [2024-07-15 13:08:51.162868] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26206f0 is same with the state(5) to be set 00:23:29.495 [2024-07-15 13:08:51.162873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26206f0 is same with the state(5) to be set 00:23:29.495 [2024-07-15 13:08:51.162877] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26206f0 is same with the state(5) to be set [... identical "The recv state of tqpair=0x... is same with the state(5) to be set" messages repeat continuously from 13:08:51.162882 through 13:08:51.168406 for tqpair=0x26206f0, 0x26230f0, 0x2620b90, 0x2621050 and 0x26214f0 ...] 00:23:29.498 [2024-07-15
13:08:51.168412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168475] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168479] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.498 [2024-07-15 13:08:51.168497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.168501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.168505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.168510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same 
with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.168515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.168519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.168524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.168528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.168533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26214f0 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169184] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169212] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169279] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the 
state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2621990 is same with the state(5) to be set 00:23:29.499 [2024-07-15 13:08:51.169419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
00:23:29.499 [2024-07-15 13:08:51.170531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:29.499 [2024-07-15 13:08:51.170566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST (cid:0-3) / ABORTED - SQ DELETION pattern repeated, each group followed by "nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(5) to be set" for tqpair 0xfe6bc0, 0xfe0970, 0xfa45d0, 0x116ffd0, 0x1177c10 and 0x116f000, through 13:08:51.171076 ...]
00:23:29.500 [2024-07-15 13:08:51.171472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26222f0 is same with the state(5) to be set
[... identical message repeated for tqpair=0x26222f0 through 13:08:51.171767 ...]
00:23:29.501 [2024-07-15 13:08:51.172414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set
[... identical message repeated for tqpair=0x26227b0 through 13:08:51.172555 ...]
00:23:29.501 [2024-07-15 13:08:51.173154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.501 [2024-07-15 13:08:51.173178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION pair repeated for sqid:1 cid:34-63 (lba 20736-24448, len:128), then READ / ABORTED - SQ DELETION pairs for sqid:1 cid:0-27 (lba 16384-19840, len:128) ...]
00:23:29.502 [2024-07-15 13:08:51.174172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:29.502 [2024-07-15 13:08:51.174180] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.174189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.174196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.174205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.174212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.174221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.174228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.174241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.174247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.174301] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfa0a40 was disconnected and freed. reset controller. 00:23:29.503 [2024-07-15 13:08:51.177218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:29.503 [2024-07-15 13:08:51.177274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7610 (9): Bad file descriptor 00:23:29.503 [2024-07-15 13:08:51.177977] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:29.503 [2024-07-15 13:08:51.178030] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:29.503 [2024-07-15 13:08:51.178077] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:29.503 [2024-07-15 13:08:51.178122] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:29.503 [2024-07-15 13:08:51.178166] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:29.503 [2024-07-15 13:08:51.178464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178513] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.503 [2024-07-15 13:08:51.178947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.503 [2024-07-15 13:08:51.178956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.178963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.178972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.178979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.178989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.178996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.179005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.179012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:29.504 [2024-07-15 13:08:51.179022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.179030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.179040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.179047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.179058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.179066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.179075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.179082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.179092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.179100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.179109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.179116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.179125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.179132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.179142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.179149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.179158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.179166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.179175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.179182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 
[2024-07-15 13:08:51.179191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.179198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.179208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.183684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183764] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183773] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183791] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183801] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183824] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183852] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.183867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26227b0 is same with the state(5) to be set 00:23:29.504 [2024-07-15 13:08:51.190997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.191041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.191050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.191060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.191068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.191078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.191085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.191097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.191105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.191114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.191122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.191132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.191139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.191149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.191156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.191167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.191174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.191183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.191191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.191200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.191207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.504 [2024-07-15 13:08:51.191217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.504 [2024-07-15 13:08:51.191224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.191242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.191253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.191263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.191271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.191280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.191287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.191297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.191304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.191313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.191320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.191329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.191337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.191346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.191356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.191366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.191373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.191447] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10fb310 was disconnected and freed. reset controller. 
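(The run of NOTICE lines above is SPDK printing each outstanding I/O on the queue pair as it is completed with ABORTED - SQ DELETION (00/08) while the qpair is disconnected and freed and the controller is reset, as reported by the "was disconnected and freed. reset controller." lines. When triaging a capture like this, the repetition is easier to digest with a small parser. The sketch below is only an illustration and not part of the SPDK test suite: the script name summarize_aborts.py and the per-sqid grouping are assumptions; it keys off the nvme_io_qpair_print_command format visible in the lines above.)

#!/usr/bin/env python3
# summarize_aborts.py - hypothetical helper, not part of SPDK: count how many
# READ/WRITE commands were printed as aborted per submission queue id.
import re
import sys
from collections import Counter

# Matches e.g. "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 ..."
CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+)")

def summarize(lines):
    counts = Counter()
    for line in lines:
        for opc, sqid, _cid in CMD_RE.findall(line):
            counts[(int(sqid), opc)] += 1
    return counts

if __name__ == "__main__":
    for (sqid, opc), n in sorted(summarize(sys.stdin).items()):
        print(f"sqid {sqid}: {n} {opc} commands printed as aborted")

(Run as e.g. "python3 summarize_aborts.py < console.log"; for this capture it would give a per-queue count of the READ/WRITE commands dumped during each qpair teardown instead of requiring a line-by-line read.)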
00:23:29.505 [2024-07-15 13:08:51.191963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.191984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.191999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 
[2024-07-15 13:08:51.192158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 
13:08:51.192521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192740] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.505 [2024-07-15 13:08:51.192800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.505 [2024-07-15 13:08:51.192810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.192817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.192828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.192835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.192844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.192851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.192861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.192868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.192878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.192885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.192894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.192902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.192911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.192918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.192928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.192935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.192946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.192954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.192963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.192970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.192979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.192987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.192996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.506 [2024-07-15 13:08:51.193312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:29.506 [2024-07-15 13:08:51.193378] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10fdac0 was disconnected and freed. reset controller. 00:23:29.506 [2024-07-15 13:08:51.193425] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:29.506 [2024-07-15 13:08:51.193818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.506 [2024-07-15 13:08:51.193833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa7610 with addr=10.0.0.2, port=4420 00:23:29.506 [2024-07-15 13:08:51.193843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa7610 is same with the state(5) to be set 00:23:29.506 [2024-07-15 13:08:51.193883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.506 [2024-07-15 13:08:51.193896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.506 [2024-07-15 13:08:51.193913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.506 [2024-07-15 13:08:51.193928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.506 [2024-07-15 13:08:51.193944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.193952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158e20 is same with the state(5) to be set 00:23:29.506 [2024-07-15 13:08:51.193967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe6bc0 (9): Bad file descriptor 00:23:29.506 [2024-07-15 13:08:51.193997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.506 [2024-07-15 13:08:51.194007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.194016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.506 [2024-07-15 13:08:51.194025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.194033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.506 [2024-07-15 13:08:51.194040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.194049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.506 [2024-07-15 13:08:51.194057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.194065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc69d0 is same with the state(5) to be set 00:23:29.506 [2024-07-15 13:08:51.194086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.506 [2024-07-15 13:08:51.194094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.506 [2024-07-15 13:08:51.194103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.507 [2024-07-15 13:08:51.194110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.194119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.507 [2024-07-15 13:08:51.194126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.194135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.507 [2024-07-15 13:08:51.194144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.194153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1142cb0 is same with the state(5) to be 
set
00:23:29.507 [2024-07-15 13:08:51.194171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe0970 (9): Bad file descriptor
00:23:29.507 [2024-07-15 13:08:51.194188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa45d0 (9): Bad file descriptor
00:23:29.507 [2024-07-15 13:08:51.194203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116ffd0 (9): Bad file descriptor
00:23:29.507 [2024-07-15 13:08:51.194219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1177c10 (9): Bad file descriptor
00:23:29.507 [2024-07-15 13:08:51.194242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116f000 (9): Bad file descriptor
00:23:29.507 [2024-07-15 13:08:51.194262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7610 (9): Bad file descriptor
00:23:29.507 [2024-07-15 13:08:51.196857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:29.507 [2024-07-15 13:08:51.196884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc69d0 (9): Bad file descriptor
00:23:29.507 [2024-07-15 13:08:51.196993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:29.507 [2024-07-15 13:08:51.197008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1158e20 (9): Bad file descriptor
00:23:29.507 [2024-07-15 13:08:51.197026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:23:29.507 [2024-07-15 13:08:51.197034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:23:29.507 [2024-07-15 13:08:51.197043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:23:29.507 [2024-07-15 13:08:51.197650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
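[Editor's note] The long runs of READ entries above and below are queued I/Os being failed back while their submission queues are torn down during the controller resets logged here: the "(00/08)" that spdk_nvme_print_completion appends to "ABORTED - SQ DELETION" is the NVMe Status Code Type / Status Code pair, SCT 0x00 (generic) and SC 0x08 (Command Aborted due to SQ Deletion). As a rough illustration only, in plain C and not SPDK's own completion structures (decode_status is a hypothetical helper), the pair decodes like this:

/*
 * Minimal sketch: decode the "(sct/sc)" pair printed throughout the log.
 * SCT 0x0 / SC 0x08 is the generic status "Command Aborted due to SQ
 * Deletion", reported for commands still queued when their submission
 * queue is deleted during a reset.
 */
#include <stdio.h>

static const char *decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0x0 && sc == 0x00) return "SUCCESS";
    if (sct == 0x0 && sc == 0x08) return "ABORTED - SQ DELETION";
    return "other status";
}

int main(void)
{
    /* The (00/08) seen in the completions above. */
    printf("(%02x/%02x) -> %s\n", 0x0, 0x08, decode_status(0x0, 0x08));
    return 0;
}

This mirrors, roughly, what the print helpers in nvme_qpair.c are doing when they render each aborted completion.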
00:23:29.507 [2024-07-15 13:08:51.198001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.507 [2024-07-15 13:08:51.198016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc69d0 with addr=10.0.0.2, port=4420
00:23:29.507 [2024-07-15 13:08:51.198024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc69d0 is same with the state(5) to be set
00:23:29.507 [2024-07-15 13:08:51.198102] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:29.507 [2024-07-15 13:08:51.198462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:29.507 [2024-07-15 13:08:51.198502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1158e20 with addr=10.0.0.2, port=4420
00:23:29.507 [2024-07-15 13:08:51.198513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158e20 is same with the state(5) to be set
00:23:29.507 [2024-07-15 13:08:51.198528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc69d0 (9): Bad file descriptor
00:23:29.507 [2024-07-15 13:08:51.198605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1158e20 (9): Bad file descriptor
00:23:29.507 [2024-07-15 13:08:51.198617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:23:29.507 [2024-07-15 13:08:51.198625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:23:29.507 [2024-07-15 13:08:51.198633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:23:29.507 [2024-07-15 13:08:51.198679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:29.507 [2024-07-15 13:08:51.198687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:23:29.507 [2024-07-15 13:08:51.198699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:23:29.507 [2024-07-15 13:08:51.198706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:23:29.507 [2024-07-15 13:08:51.198748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
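[Editor's note] The connect and flush failures in this block come down to two ordinary POSIX errno values: 111 is ECONNREFUSED (the target at 10.0.0.2:4420 is not accepting connections while the subsystem is being restarted), and the "(9): Bad file descriptor" is EBADF from operating on a socket that has already been closed. A minimal sketch, independent of SPDK, using 127.0.0.1:4420 only as a stand-in address where, presumably, nothing is listening:

/*
 * Reproduce the two errno values seen above: ECONNREFUSED (111) from
 * connect() to a port with no listener, and EBADF (9) from reusing a
 * descriptor after close(). Address and port are placeholders.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);   /* assume no listener here */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    /* A second operation on the closed descriptor yields EBADF (9). */
    if (write(fd, "x", 1) < 0)
        printf("write() failed, errno = %d (%s)\n", errno, strerror(errno));
    return 0;
}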
00:23:29.507 [2024-07-15 13:08:51.203488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1142cb0 (9): Bad file descriptor 00:23:29.507 [2024-07-15 13:08:51.203637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203818] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.203988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.507 [2024-07-15 13:08:51.203996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.507 [2024-07-15 13:08:51.204005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.508 [2024-07-15 13:08:51.204751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.508 [2024-07-15 13:08:51.204761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.204769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.204777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105c420 is same with the state(5) to be set 00:23:29.509 [2024-07-15 13:08:51.206064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206171] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.509 [2024-07-15 13:08:51.206797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.509 [2024-07-15 13:08:51.206807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.206815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.206825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.206832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.206842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.206850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.206861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.206868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.206878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:29.510 [2024-07-15 13:08:51.206885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.206895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.206903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.206914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.206922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.206932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.206939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.206950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.206959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.206970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.206977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.206987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.206995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.207005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.207013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.207022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.207030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.207039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.207047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.207057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 
13:08:51.207065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.207074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.207082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.207093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.207101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.207111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.207118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.207127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.207137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.207146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.207154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.207163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.207171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.207181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.207189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.207198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.207206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.207215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7e60 is same with the state(5) to be set 00:23:29.510 [2024-07-15 13:08:51.208487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208696] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.510 [2024-07-15 13:08:51.208835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.510 [2024-07-15 13:08:51.208843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.208858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.208865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.208876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.208884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.208894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.208902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.208912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.208919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.208929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.208937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.208947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.208954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.208963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.208972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.208983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.208991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:29.511 [2024-07-15 13:08:51.209409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.511 [2024-07-15 13:08:51.209516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.511 [2024-07-15 13:08:51.209526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.209534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.209543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.209551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.209560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.209568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.209577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 
13:08:51.209585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.209594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.209602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.209612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.209619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.209627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f9230 is same with the state(5) to be set 00:23:29.512 [2024-07-15 13:08:51.210903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.210918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.210932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.210941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.210952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.210960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.210971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.210980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.210991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.210999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.512 [2024-07-15 13:08:51.211506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.512 [2024-07-15 13:08:51.211514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.211983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.211991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.212001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.212008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.212018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.212025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.212035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.212042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.212051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9ed90 is same with the state(5) to be set 00:23:29.513 [2024-07-15 13:08:51.213324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.213340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.213352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.213361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.213372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.213382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.213393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.213401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.213414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.213422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.213432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.213440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.213449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.213457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.213467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.213475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.213484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.213492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.213501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.213510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.213519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.213527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.213536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.513 [2024-07-15 13:08:51.213544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.513 [2024-07-15 13:08:51.213553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213561] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.213991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.213999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:29.514 [2024-07-15 13:08:51.214281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.514 [2024-07-15 13:08:51.214308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.514 [2024-07-15 13:08:51.214317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.214325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.214334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.214342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.214352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.214359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.214369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.214377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.214387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.214395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.214405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.214413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.214424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.214432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.214442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.214449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 
13:08:51.214459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.214467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.214476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa0220 is same with the state(5) to be set 00:23:29.515 [2024-07-15 13:08:51.215758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.215773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.215785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.215794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.215808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.215817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.215828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.215837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.215848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.215855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.215865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.215872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.215881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.215888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.215897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.215905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.215915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.215922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.215932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.215940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.215949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.215957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.215966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.215974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.215983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.215991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216102] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.515 [2024-07-15 13:08:51.216366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.515 [2024-07-15 13:08:51.216373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:29.516 [2024-07-15 13:08:51.216816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.516 [2024-07-15 13:08:51.216893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.516 [2024-07-15 13:08:51.216902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fefa0 is same with the state(5) to be set 00:23:29.516 [2024-07-15 13:08:51.221299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:29.516 [2024-07-15 13:08:51.221342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:29.516 [2024-07-15 13:08:51.221353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:29.516 [2024-07-15 13:08:51.221363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:29.516 [2024-07-15 13:08:51.221436] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:29.516 [2024-07-15 13:08:51.221456] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:29.516 [2024-07-15 13:08:51.221471] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:29.516 [2024-07-15 13:08:51.221559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:29.516 [2024-07-15 13:08:51.221572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:29.516 [2024-07-15 13:08:51.221583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:29.516 [2024-07-15 13:08:51.222061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.516 [2024-07-15 13:08:51.222079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa7610 with addr=10.0.0.2, port=4420 00:23:29.516 [2024-07-15 13:08:51.222088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa7610 is same with the state(5) to be set 00:23:29.516 [2024-07-15 13:08:51.222307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.516 [2024-07-15 13:08:51.222322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa45d0 with addr=10.0.0.2, port=4420 00:23:29.516 [2024-07-15 13:08:51.222329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa45d0 is same with the state(5) to be set 00:23:29.517 [2024-07-15 13:08:51.222541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.517 [2024-07-15 13:08:51.222552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116ffd0 with addr=10.0.0.2, port=4420 00:23:29.517 [2024-07-15 13:08:51.222559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116ffd0 is same with the state(5) to be set 00:23:29.517 [2024-07-15 13:08:51.222757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.517 [2024-07-15 13:08:51.222767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1177c10 with addr=10.0.0.2, port=4420 00:23:29.517 [2024-07-15 13:08:51.222775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1177c10 is same with the state(5) to be set 00:23:29.517 [2024-07-15 13:08:51.224101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224181] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.517 [2024-07-15 13:08:51.224802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.517 [2024-07-15 13:08:51.224810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.224819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.224827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.224836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.224846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.224855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.224863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.224872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.224880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:29.518 [2024-07-15 13:08:51.224889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.224896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.224905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.224913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.224922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.224930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.224939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.224947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.224956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.224964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.224973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.224982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.224991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.224998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.225008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.225016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.225025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.225034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.225043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.225051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 
13:08:51.225062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.225071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.225081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.225089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.225098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.225106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.225116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.225124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.225134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.225142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.225152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.225159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.225169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.225177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.225187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.225194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.225204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.225212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.225222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.518 [2024-07-15 13:08:51.225234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.518 [2024-07-15 13:08:51.225244] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc7c0 is same with the state(5) to be set
00:23:29.518 [2024-07-15 13:08:51.227674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:29.518 [2024-07-15 13:08:51.227699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:29.518 task offset: 20608 on job bdev=Nvme6n1 fails
00:23:29.518
00:23:29.518 Latency(us)
00:23:29.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:29.518 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:29.518 Job: Nvme1n1 ended in about 0.78 seconds with error
00:23:29.518 Verification LBA range: start 0x0 length 0x400
00:23:29.518 Nvme1n1 : 0.78 163.57 10.22 81.78 0.00 257375.86 27197.44 253405.87
00:23:29.518 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:29.518 Job: Nvme2n1 ended in about 0.78 seconds with error
00:23:29.518 Verification LBA range: start 0x0 length 0x400
00:23:29.518 Nvme2n1 : 0.78 168.16 10.51 81.53 0.00 246756.41 24576.00 209715.20
00:23:29.518 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:29.518 Job: Nvme3n1 ended in about 0.79 seconds with error
00:23:29.518 Verification LBA range: start 0x0 length 0x400
00:23:29.518 Nvme3n1 : 0.79 162.57 10.16 81.28 0.00 246405.12 17913.17 249910.61
00:23:29.518 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:29.518 Job: Nvme4n1 ended in about 0.79 seconds with error
00:23:29.518 Verification LBA range: start 0x0 length 0x400
00:23:29.518 Nvme4n1 : 0.79 162.07 10.13 81.03 0.00 240886.33 21517.65 230686.72
00:23:29.518 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:29.518 Job: Nvme5n1 ended in about 0.79 seconds with error
00:23:29.518 Verification LBA range: start 0x0 length 0x400
00:23:29.518 Nvme5n1 : 0.79 161.57 10.10 80.79 0.00 235318.33 16711.68 207967.57
00:23:29.518 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:29.518 Job: Nvme6n1 ended in about 0.75 seconds with error
00:23:29.518 Verification LBA range: start 0x0 length 0x400
00:23:29.518 Nvme6n1 : 0.75 169.94 10.62 84.97 0.00 216021.26 4014.08 251658.24
00:23:29.518 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:29.518 Job: Nvme7n1 ended in about 0.77 seconds with error
00:23:29.518 Verification LBA range: start 0x0 length 0x400
00:23:29.518 Nvme7n1 : 0.77 165.77 10.36 82.89 0.00 216009.10 16930.13 255153.49
00:23:29.518 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:29.518 Job: Nvme8n1 ended in about 0.80 seconds with error
00:23:29.518 Verification LBA range: start 0x0 length 0x400
00:23:29.518 Nvme8n1 : 0.80 159.41 9.96 79.70 0.00 219934.15 21408.43 246415.36
00:23:29.518 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:29.518 Job: Nvme9n1 ended in about 0.77 seconds with error
00:23:29.518 Verification LBA range: start 0x0 length 0x400
00:23:29.518 Nvme9n1 : 0.77 165.51 10.34 82.75 0.00 203802.38 3986.77 253405.87
00:23:29.518 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:29.518 Job: Nvme10n1 ended in about 0.79 seconds with error
00:23:29.518 Verification LBA range: start 0x0 length 0x400
00:23:29.518 Nvme10n1 : 0.79 80.54 5.03 80.54 0.00 307115.52 15182.51 284863.15
=================================================================================================================== 00:23:29.518 Total : 1559.11 97.44 817.28 0.00 236634.16 3986.77 284863.15 00:23:29.518 [2024-07-15 13:08:51.254827] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:29.518 [2024-07-15 13:08:51.254857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:29.518 [2024-07-15 13:08:51.255338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.518 [2024-07-15 13:08:51.255355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe0970 with addr=10.0.0.2, port=4420 00:23:29.518 [2024-07-15 13:08:51.255365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe0970 is same with the state(5) to be set 00:23:29.518 [2024-07-15 13:08:51.255731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.518 [2024-07-15 13:08:51.255742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe6bc0 with addr=10.0.0.2, port=4420 00:23:29.518 [2024-07-15 13:08:51.255749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe6bc0 is same with the state(5) to be set 00:23:29.518 [2024-07-15 13:08:51.256116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.518 [2024-07-15 13:08:51.256127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x116f000 with addr=10.0.0.2, port=4420 00:23:29.518 [2024-07-15 13:08:51.256134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116f000 is same with the state(5) to be set 00:23:29.519 [2024-07-15 13:08:51.256147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7610 (9): Bad file descriptor 00:23:29.519 [2024-07-15 13:08:51.256159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa45d0 (9): Bad file descriptor 00:23:29.519 [2024-07-15 13:08:51.256168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116ffd0 (9): Bad file descriptor 00:23:29.519 [2024-07-15 13:08:51.256178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1177c10 (9): Bad file descriptor 00:23:29.519 [2024-07-15 13:08:51.256657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.519 [2024-07-15 13:08:51.256672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc69d0 with addr=10.0.0.2, port=4420 00:23:29.519 [2024-07-15 13:08:51.256680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc69d0 is same with the state(5) to be set 00:23:29.519 [2024-07-15 13:08:51.257044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.519 [2024-07-15 13:08:51.257057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1158e20 with addr=10.0.0.2, port=4420 00:23:29.519 [2024-07-15 13:08:51.257064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158e20 is same with the state(5) to be set 00:23:29.519 [2024-07-15 13:08:51.257415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.519 [2024-07-15 13:08:51.257426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x1142cb0 with addr=10.0.0.2, port=4420 00:23:29.519 [2024-07-15 13:08:51.257434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1142cb0 is same with the state(5) to be set 00:23:29.519 [2024-07-15 13:08:51.257443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe0970 (9): Bad file descriptor 00:23:29.519 [2024-07-15 13:08:51.257453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe6bc0 (9): Bad file descriptor 00:23:29.519 [2024-07-15 13:08:51.257462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116f000 (9): Bad file descriptor 00:23:29.519 [2024-07-15 13:08:51.257470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:29.519 [2024-07-15 13:08:51.257478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:29.519 [2024-07-15 13:08:51.257487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:29.519 [2024-07-15 13:08:51.257500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:29.519 [2024-07-15 13:08:51.257506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:29.519 [2024-07-15 13:08:51.257513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:29.519 [2024-07-15 13:08:51.257524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:29.519 [2024-07-15 13:08:51.257531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:29.519 [2024-07-15 13:08:51.257538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:29.519 [2024-07-15 13:08:51.257548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:29.519 [2024-07-15 13:08:51.257557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:29.519 [2024-07-15 13:08:51.257564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:29.519 [2024-07-15 13:08:51.257585] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:29.519 [2024-07-15 13:08:51.257596] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:29.519 [2024-07-15 13:08:51.257606] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:29.519 [2024-07-15 13:08:51.257617] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:29.519 [2024-07-15 13:08:51.257627] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:29.519 [2024-07-15 13:08:51.257637] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:29.519 [2024-07-15 13:08:51.257647] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:29.519 [2024-07-15 13:08:51.257982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.519 [2024-07-15 13:08:51.257993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.519 [2024-07-15 13:08:51.258000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.519 [2024-07-15 13:08:51.258006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.519 [2024-07-15 13:08:51.258014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc69d0 (9): Bad file descriptor 00:23:29.519 [2024-07-15 13:08:51.258023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1158e20 (9): Bad file descriptor 00:23:29.519 [2024-07-15 13:08:51.258032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1142cb0 (9): Bad file descriptor 00:23:29.519 [2024-07-15 13:08:51.258042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:29.519 [2024-07-15 13:08:51.258048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:29.519 [2024-07-15 13:08:51.258055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:29.519 [2024-07-15 13:08:51.258065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:29.519 [2024-07-15 13:08:51.258071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:29.519 [2024-07-15 13:08:51.258079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:29.519 [2024-07-15 13:08:51.258089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:29.519 [2024-07-15 13:08:51.258095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:29.519 [2024-07-15 13:08:51.258101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:29.519 [2024-07-15 13:08:51.258143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.519 [2024-07-15 13:08:51.258152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.519 [2024-07-15 13:08:51.258158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.519 [2024-07-15 13:08:51.258164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:29.519 [2024-07-15 13:08:51.258170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:29.519 [2024-07-15 13:08:51.258177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:23:29.519 [2024-07-15 13:08:51.258190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:29.519 [2024-07-15 13:08:51.258196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:29.519 [2024-07-15 13:08:51.258203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:29.519 [2024-07-15 13:08:51.258212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:29.519 [2024-07-15 13:08:51.258219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:29.519 [2024-07-15 13:08:51.258227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:29.519 [2024-07-15 13:08:51.258264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.519 [2024-07-15 13:08:51.258273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.519 [2024-07-15 13:08:51.258279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:29.779 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:29.779 13:08:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 766862 00:23:30.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (766862) - No such process 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:30.718 rmmod nvme_tcp 00:23:30.718 rmmod nvme_fabrics 00:23:30.718 rmmod nvme_keyring 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
return 0 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.718 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:30.719 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.719 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.719 13:08:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.257 13:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.257 00:23:33.257 real 0m7.779s 00:23:33.257 user 0m19.410s 00:23:33.257 sys 0m1.096s 00:23:33.257 13:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.257 13:08:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:33.257 ************************************ 00:23:33.257 END TEST nvmf_shutdown_tc3 00:23:33.257 ************************************ 00:23:33.257 13:08:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:33.257 13:08:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:33.257 00:23:33.257 real 0m33.125s 00:23:33.257 user 1m16.479s 00:23:33.257 sys 0m9.733s 00:23:33.257 13:08:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.257 13:08:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:33.257 ************************************ 00:23:33.257 END TEST nvmf_shutdown 00:23:33.257 ************************************ 00:23:33.257 13:08:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:33.257 13:08:54 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:33.257 13:08:54 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:33.257 13:08:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.257 13:08:54 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:33.257 13:08:54 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:33.257 13:08:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.257 13:08:54 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:33.257 13:08:54 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:33.257 13:08:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:33.257 13:08:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.257 13:08:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.257 ************************************ 00:23:33.257 START TEST nvmf_multicontroller 00:23:33.257 ************************************ 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:33.257 * Looking for test storage... 00:23:33.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.257 13:08:54 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:33.258 13:08:54 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:33.258 13:08:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.390 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.390 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.391 13:09:02 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:41.391 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:41.391 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:41.391 Found net devices under 0000:31:00.0: cvl_0_0 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:41.391 Found net devices under 0000:31:00.1: cvl_0_1 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.391 13:09:02 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.391 13:09:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:41.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:23:41.391 00:23:41.391 --- 10.0.0.2 ping statistics --- 00:23:41.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.391 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:23:41.391 00:23:41.391 --- 10.0.0.1 ping statistics --- 00:23:41.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.391 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=772491 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 772491 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 772491 ']' 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.391 13:09:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.392 13:09:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.392 13:09:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.652 [2024-07-15 13:09:03.238203] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:23:41.652 [2024-07-15 13:09:03.238281] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.652 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.652 [2024-07-15 13:09:03.334699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:41.652 [2024-07-15 13:09:03.427593] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.652 [2024-07-15 13:09:03.427652] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.652 [2024-07-15 13:09:03.427661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.652 [2024-07-15 13:09:03.427669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.652 [2024-07-15 13:09:03.427675] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
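For readers following the nvmfappstart trace here: the target is launched inside the cvl_0_0_ns_spdk namespace with core mask 0xE (three cores, which is why three reactors come up), and the harness then blocks until the RPC socket answers before any rpc_cmd is issued. A rough sketch of that sequence, with paths abbreviated and the polling loop standing in for waitforlisten from autotest_common.sh rather than reproducing it:

    # Sketch only; flags mirror the traced command, the loop is an illustrative
    # stand-in for waitforlisten, not the harness's actual implementation.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5   # keep polling until the target is listening on its RPC socket
    done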
00:23:41.652 [2024-07-15 13:09:03.427817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.652 [2024-07-15 13:09:03.427982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.652 [2024-07-15 13:09:03.427983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.222 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.222 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:42.222 13:09:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:42.222 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:42.222 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.483 [2024-07-15 13:09:04.070007] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.483 Malloc0 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.483 [2024-07-15 13:09:04.134604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.483 
13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.483 [2024-07-15 13:09:04.146570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.483 Malloc1 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=772727 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 772727 /var/tmp/bdevperf.sock 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 772727 ']' 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.483 13:09:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.427 NVMe0n1 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.427 1 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.427 request: 00:23:43.427 { 00:23:43.427 "name": "NVMe0", 00:23:43.427 "trtype": "tcp", 00:23:43.427 "traddr": "10.0.0.2", 00:23:43.427 "adrfam": "ipv4", 00:23:43.427 "trsvcid": "4420", 00:23:43.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.427 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:43.427 "hostaddr": "10.0.0.2", 00:23:43.427 "hostsvcid": "60000", 00:23:43.427 "prchk_reftag": false, 00:23:43.427 "prchk_guard": false, 00:23:43.427 "hdgst": false, 00:23:43.427 "ddgst": false, 00:23:43.427 "method": "bdev_nvme_attach_controller", 00:23:43.427 "req_id": 1 00:23:43.427 } 00:23:43.427 Got JSON-RPC error response 00:23:43.427 response: 00:23:43.427 { 00:23:43.427 "code": -114, 00:23:43.427 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:43.427 } 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.427 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.427 request: 00:23:43.427 { 00:23:43.427 "name": "NVMe0", 00:23:43.427 "trtype": "tcp", 00:23:43.427 "traddr": "10.0.0.2", 00:23:43.427 "adrfam": "ipv4", 00:23:43.427 "trsvcid": "4420", 00:23:43.427 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:43.427 "hostaddr": "10.0.0.2", 00:23:43.427 "hostsvcid": "60000", 00:23:43.427 "prchk_reftag": false, 00:23:43.427 "prchk_guard": false, 00:23:43.427 
"hdgst": false, 00:23:43.427 "ddgst": false, 00:23:43.427 "method": "bdev_nvme_attach_controller", 00:23:43.427 "req_id": 1 00:23:43.427 } 00:23:43.428 Got JSON-RPC error response 00:23:43.428 response: 00:23:43.428 { 00:23:43.428 "code": -114, 00:23:43.428 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:43.428 } 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.428 request: 00:23:43.428 { 00:23:43.428 "name": "NVMe0", 00:23:43.428 "trtype": "tcp", 00:23:43.428 "traddr": "10.0.0.2", 00:23:43.428 "adrfam": "ipv4", 00:23:43.428 "trsvcid": "4420", 00:23:43.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.428 "hostaddr": "10.0.0.2", 00:23:43.428 "hostsvcid": "60000", 00:23:43.428 "prchk_reftag": false, 00:23:43.428 "prchk_guard": false, 00:23:43.428 "hdgst": false, 00:23:43.428 "ddgst": false, 00:23:43.428 "multipath": "disable", 00:23:43.428 "method": "bdev_nvme_attach_controller", 00:23:43.428 "req_id": 1 00:23:43.428 } 00:23:43.428 Got JSON-RPC error response 00:23:43.428 response: 00:23:43.428 { 00:23:43.428 "code": -114, 00:23:43.428 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:43.428 } 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:43.428 13:09:05 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.428 request: 00:23:43.428 { 00:23:43.428 "name": "NVMe0", 00:23:43.428 "trtype": "tcp", 00:23:43.428 "traddr": "10.0.0.2", 00:23:43.428 "adrfam": "ipv4", 00:23:43.428 "trsvcid": "4420", 00:23:43.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.428 "hostaddr": "10.0.0.2", 00:23:43.428 "hostsvcid": "60000", 00:23:43.428 "prchk_reftag": false, 00:23:43.428 "prchk_guard": false, 00:23:43.428 "hdgst": false, 00:23:43.428 "ddgst": false, 00:23:43.428 "multipath": "failover", 00:23:43.428 "method": "bdev_nvme_attach_controller", 00:23:43.428 "req_id": 1 00:23:43.428 } 00:23:43.428 Got JSON-RPC error response 00:23:43.428 response: 00:23:43.428 { 00:23:43.428 "code": -114, 00:23:43.428 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:43.428 } 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:43.428 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:43.689 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:43.690 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:43.690 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.690 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.690 00:23:43.690 13:09:05 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.690 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:43.690 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.690 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.690 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.690 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:43.690 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.690 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.951 00:23:43.951 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.951 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:43.951 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:43.951 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.951 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:43.951 13:09:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.951 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:43.951 13:09:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:45.336 0 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 772727 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 772727 ']' 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 772727 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 772727 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 772727' 00:23:45.336 killing process with pid 772727 00:23:45.336 13:09:06 
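[annotation] A rough by-hand equivalent of the multicontroller checks traced above, assuming the bdevperf RPC socket, addresses, and NQNs from this run are still in place (rpc_cmd in the trace is essentially SPDK's scripts/rpc.py; paths are taken from the log, not guaranteed elsewhere):
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # first attach succeeds; re-attaching the same controller name on the same path
  # is what produced the JSON-RPC -114 "already exists" errors earlier in the trace
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe   # test expects 2
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests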
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 772727 00:23:45.336 13:09:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 772727 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:45.336 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:45.336 [2024-07-15 13:09:04.265465] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:23:45.336 [2024-07-15 13:09:04.265521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772727 ] 00:23:45.336 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.336 [2024-07-15 13:09:04.331483] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.336 [2024-07-15 13:09:04.395862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.336 [2024-07-15 13:09:05.685910] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 460dc675-926f-42c7-a6ea-e9f46c40ffba already exists 00:23:45.336 [2024-07-15 13:09:05.685941] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:460dc675-926f-42c7-a6ea-e9f46c40ffba alias for bdev NVMe1n1 00:23:45.336 [2024-07-15 13:09:05.685949] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:45.336 Running I/O for 1 seconds... 
00:23:45.336 00:23:45.336 Latency(us) 00:23:45.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.336 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:45.336 NVMe0n1 : 1.01 19483.66 76.11 0.00 0.00 6544.97 6116.69 13325.65 00:23:45.336 =================================================================================================================== 00:23:45.336 Total : 19483.66 76.11 0.00 0.00 6544.97 6116.69 13325.65 00:23:45.336 Received shutdown signal, test time was about 1.000000 seconds 00:23:45.336 00:23:45.336 Latency(us) 00:23:45.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.336 =================================================================================================================== 00:23:45.336 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:45.336 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.336 rmmod nvme_tcp 00:23:45.336 rmmod nvme_fabrics 00:23:45.336 rmmod nvme_keyring 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 772491 ']' 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 772491 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 772491 ']' 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 772491 00:23:45.336 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:45.337 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.337 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 772491 00:23:45.598 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:45.598 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:45.598 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 772491' 00:23:45.598 killing process with pid 772491 00:23:45.598 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 772491 00:23:45.598 13:09:07 
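[annotation] The nvmftestfini/killprocess teardown traced here boils down to roughly the following, done by hand (the pid 772491 and interface name cvl_0_1 are specific to this run):
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 772491                  # the nvmf_tgt started for this suite
  ip -4 addr flush cvl_0_1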
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 772491 00:23:45.598 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.598 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:45.598 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.598 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.598 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.598 13:09:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.598 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.598 13:09:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.178 13:09:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:48.178 00:23:48.178 real 0m14.634s 00:23:48.178 user 0m17.162s 00:23:48.178 sys 0m6.866s 00:23:48.178 13:09:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:48.178 13:09:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.178 ************************************ 00:23:48.178 END TEST nvmf_multicontroller 00:23:48.178 ************************************ 00:23:48.178 13:09:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:48.178 13:09:09 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:48.178 13:09:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:48.178 13:09:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:48.178 13:09:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:48.178 ************************************ 00:23:48.178 START TEST nvmf_aer 00:23:48.178 ************************************ 00:23:48.178 13:09:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:48.178 * Looking for test storage... 
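[annotation] Each host-level suite in this run is launched the same way by run_test; a minimal way to reproduce just this one from the same checkout (path taken from the log; normally run as root) would be:
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/nvmf/host/aer.sh --transport=tcp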
00:23:48.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:48.179 13:09:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.331 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.331 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:56.332 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:23:56.332 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:56.332 Found net devices under 0000:31:00.0: cvl_0_0 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:56.332 Found net devices under 0000:31:00.1: cvl_0_1 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.332 
13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:23:56.332 00:23:56.332 --- 10.0.0.2 ping statistics --- 00:23:56.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.332 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:56.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:23:56.332 00:23:56.332 --- 10.0.0.1 ping statistics --- 00:23:56.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.332 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=778416 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 778416 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 778416 ']' 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.332 13:09:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.332 [2024-07-15 13:09:17.796261] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:23:56.332 [2024-07-15 13:09:17.796329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.332 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.332 [2024-07-15 13:09:17.877330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:56.332 [2024-07-15 13:09:17.952509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.332 [2024-07-15 13:09:17.952549] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
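[annotation] Condensed, the nvmf_tcp_init wiring traced above (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses come from this machine) is:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator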
00:23:56.332 [2024-07-15 13:09:17.952559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.332 [2024-07-15 13:09:17.952566] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.332 [2024-07-15 13:09:17.952571] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.332 [2024-07-15 13:09:17.952706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.333 [2024-07-15 13:09:17.952827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.333 [2024-07-15 13:09:17.952982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.333 [2024-07-15 13:09:17.952983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.905 [2024-07-15 13:09:18.614830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.905 Malloc0 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.905 [2024-07-15 13:09:18.674237] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.905 [ 00:23:56.905 { 00:23:56.905 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:56.905 "subtype": "Discovery", 00:23:56.905 "listen_addresses": [], 00:23:56.905 "allow_any_host": true, 00:23:56.905 "hosts": [] 00:23:56.905 }, 00:23:56.905 { 00:23:56.905 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.905 "subtype": "NVMe", 00:23:56.905 "listen_addresses": [ 00:23:56.905 { 00:23:56.905 "trtype": "TCP", 00:23:56.905 "adrfam": "IPv4", 00:23:56.905 "traddr": "10.0.0.2", 00:23:56.905 "trsvcid": "4420" 00:23:56.905 } 00:23:56.905 ], 00:23:56.905 "allow_any_host": true, 00:23:56.905 "hosts": [], 00:23:56.905 "serial_number": "SPDK00000000000001", 00:23:56.905 "model_number": "SPDK bdev Controller", 00:23:56.905 "max_namespaces": 2, 00:23:56.905 "min_cntlid": 1, 00:23:56.905 "max_cntlid": 65519, 00:23:56.905 "namespaces": [ 00:23:56.905 { 00:23:56.905 "nsid": 1, 00:23:56.905 "bdev_name": "Malloc0", 00:23:56.905 "name": "Malloc0", 00:23:56.905 "nguid": "6BEBA3CF566C4DB8B3AB19AE673491FC", 00:23:56.905 "uuid": "6beba3cf-566c-4db8-b3ab-19ae673491fc" 00:23:56.905 } 00:23:56.905 ] 00:23:56.905 } 00:23:56.905 ] 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=778569 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:56.905 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:57.166 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.166 Malloc1 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.166 Asynchronous Event Request test 00:23:57.166 Attaching to 10.0.0.2 00:23:57.166 Attached to 10.0.0.2 00:23:57.166 Registering asynchronous event callbacks... 00:23:57.166 Starting namespace attribute notice tests for all controllers... 00:23:57.166 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:57.166 aer_cb - Changed Namespace 00:23:57.166 Cleaning up... 00:23:57.166 [ 00:23:57.166 { 00:23:57.166 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:57.166 "subtype": "Discovery", 00:23:57.166 "listen_addresses": [], 00:23:57.166 "allow_any_host": true, 00:23:57.166 "hosts": [] 00:23:57.166 }, 00:23:57.166 { 00:23:57.166 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.166 "subtype": "NVMe", 00:23:57.166 "listen_addresses": [ 00:23:57.166 { 00:23:57.166 "trtype": "TCP", 00:23:57.166 "adrfam": "IPv4", 00:23:57.166 "traddr": "10.0.0.2", 00:23:57.166 "trsvcid": "4420" 00:23:57.166 } 00:23:57.166 ], 00:23:57.166 "allow_any_host": true, 00:23:57.166 "hosts": [], 00:23:57.166 "serial_number": "SPDK00000000000001", 00:23:57.166 "model_number": "SPDK bdev Controller", 00:23:57.166 "max_namespaces": 2, 00:23:57.166 "min_cntlid": 1, 00:23:57.166 "max_cntlid": 65519, 00:23:57.166 "namespaces": [ 00:23:57.166 { 00:23:57.166 "nsid": 1, 00:23:57.166 "bdev_name": "Malloc0", 00:23:57.166 "name": "Malloc0", 00:23:57.166 "nguid": "6BEBA3CF566C4DB8B3AB19AE673491FC", 00:23:57.166 "uuid": "6beba3cf-566c-4db8-b3ab-19ae673491fc" 00:23:57.166 }, 00:23:57.166 { 00:23:57.166 "nsid": 2, 00:23:57.166 "bdev_name": "Malloc1", 00:23:57.166 "name": "Malloc1", 00:23:57.166 "nguid": "87910A7B32E24607A7776E0B6D865BEC", 00:23:57.166 "uuid": "87910a7b-32e2-4607-a777-6e0b6d865bec" 00:23:57.166 } 00:23:57.166 ] 00:23:57.166 } 00:23:57.166 ] 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 778569 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.166 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.427 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.427 13:09:18 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:23:57.427 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.427 13:09:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:57.427 rmmod nvme_tcp 00:23:57.427 rmmod nvme_fabrics 00:23:57.427 rmmod nvme_keyring 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 778416 ']' 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 778416 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 778416 ']' 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 778416 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 778416 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 778416' 00:23:57.427 killing process with pid 778416 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 778416 00:23:57.427 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 778416 00:23:57.688 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:57.688 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:57.688 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:57.688 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.688 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:57.688 13:09:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.688 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
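[annotation] The namespace-change AEN exercised by this suite reduces to the RPC sequence below, taken from the trace (rpc_cmd wraps scripts/rpk rpc.py against the default target socket; addresses, NQNs, and the touch-file path are the ones used in this run):
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # start the AER listener, then add a second namespace to trigger the notice
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2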
00:23:57.688 13:09:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.601 13:09:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:59.601 00:23:59.601 real 0m11.869s 00:23:59.601 user 0m7.685s 00:23:59.601 sys 0m6.422s 00:23:59.601 13:09:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:59.601 13:09:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.601 ************************************ 00:23:59.601 END TEST nvmf_aer 00:23:59.601 ************************************ 00:23:59.601 13:09:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:59.601 13:09:21 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:59.601 13:09:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:59.601 13:09:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:59.601 13:09:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:59.862 ************************************ 00:23:59.862 START TEST nvmf_async_init 00:23:59.862 ************************************ 00:23:59.862 13:09:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:59.863 * Looking for test storage... 00:23:59.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=aa67ad0c20b14ddea9ca16c38029c2d8 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.863 13:09:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:08.001 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:08.002 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:08.002 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:08.002 Found net devices under 0000:31:00.0: cvl_0_0 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:08.002 Found net devices under 0000:31:00.1: cvl_0_1 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:08.002 
13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:08.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:24:08.002 00:24:08.002 --- 10.0.0.2 ping statistics --- 00:24:08.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.002 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:08.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:24:08.002 00:24:08.002 --- 10.0.0.1 ping statistics --- 00:24:08.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.002 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=783244 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 783244 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 783244 ']' 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.002 13:09:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.002 [2024-07-15 13:09:29.618088] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:24:08.002 [2024-07-15 13:09:29.618152] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.002 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.002 [2024-07-15 13:09:29.699371] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.002 [2024-07-15 13:09:29.771545] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.002 [2024-07-15 13:09:29.771585] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.002 [2024-07-15 13:09:29.771593] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.002 [2024-07-15 13:09:29.771600] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.002 [2024-07-15 13:09:29.771606] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
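Aside on the setup traced above: nvmf_tcp_init carves one port of the detected E810 NIC into a private network namespace so the SPDK target and the initiator can exchange NVMe/TCP over real hardware on a single host. A minimal standalone sketch of that plumbing, using the interface names (cvl_0_0, cvl_0_1) and 10.0.0.x addresses this particular run detected; other systems will differ:

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                # sanity checks, one in each direction
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched with "ip netns exec cvl_0_0_ns_spdk" prepended, so its NVMe/TCP listener binds to 10.0.0.2 inside that namespace while the initiator connects from the default namespace.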
00:24:08.002 [2024-07-15 13:09:29.771633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.946 [2024-07-15 13:09:30.446519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.946 null0 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g aa67ad0c20b14ddea9ca16c38029c2d8 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.946 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.947 [2024-07-15 13:09:30.502771] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.947 nvme0n1 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.947 [ 00:24:08.947 { 00:24:08.947 "name": "nvme0n1", 00:24:08.947 "aliases": [ 00:24:08.947 "aa67ad0c-20b1-4dde-a9ca-16c38029c2d8" 00:24:08.947 ], 00:24:08.947 "product_name": "NVMe disk", 00:24:08.947 "block_size": 512, 00:24:08.947 "num_blocks": 2097152, 00:24:08.947 "uuid": "aa67ad0c-20b1-4dde-a9ca-16c38029c2d8", 00:24:08.947 "assigned_rate_limits": { 00:24:08.947 "rw_ios_per_sec": 0, 00:24:08.947 "rw_mbytes_per_sec": 0, 00:24:08.947 "r_mbytes_per_sec": 0, 00:24:08.947 "w_mbytes_per_sec": 0 00:24:08.947 }, 00:24:08.947 "claimed": false, 00:24:08.947 "zoned": false, 00:24:08.947 "supported_io_types": { 00:24:08.947 "read": true, 00:24:08.947 "write": true, 00:24:08.947 "unmap": false, 00:24:08.947 "flush": true, 00:24:08.947 "reset": true, 00:24:08.947 "nvme_admin": true, 00:24:08.947 "nvme_io": true, 00:24:08.947 "nvme_io_md": false, 00:24:08.947 "write_zeroes": true, 00:24:08.947 "zcopy": false, 00:24:08.947 "get_zone_info": false, 00:24:08.947 "zone_management": false, 00:24:08.947 "zone_append": false, 00:24:08.947 "compare": true, 00:24:08.947 "compare_and_write": true, 00:24:08.947 "abort": true, 00:24:08.947 "seek_hole": false, 00:24:08.947 "seek_data": false, 00:24:08.947 "copy": true, 00:24:08.947 "nvme_iov_md": false 00:24:08.947 }, 00:24:08.947 "memory_domains": [ 00:24:08.947 { 00:24:08.947 "dma_device_id": "system", 00:24:08.947 "dma_device_type": 1 00:24:08.947 } 00:24:08.947 ], 00:24:08.947 "driver_specific": { 00:24:08.947 "nvme": [ 00:24:08.947 { 00:24:08.947 "trid": { 00:24:08.947 "trtype": "TCP", 00:24:08.947 "adrfam": "IPv4", 00:24:08.947 "traddr": "10.0.0.2", 00:24:08.947 "trsvcid": "4420", 00:24:08.947 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:08.947 }, 00:24:08.947 "ctrlr_data": { 00:24:08.947 "cntlid": 1, 00:24:08.947 "vendor_id": "0x8086", 00:24:08.947 "model_number": "SPDK bdev Controller", 00:24:08.947 "serial_number": "00000000000000000000", 00:24:08.947 "firmware_revision": "24.09", 00:24:08.947 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:08.947 "oacs": { 00:24:08.947 "security": 0, 00:24:08.947 "format": 0, 00:24:08.947 "firmware": 0, 00:24:08.947 "ns_manage": 0 00:24:08.947 }, 00:24:08.947 "multi_ctrlr": true, 00:24:08.947 "ana_reporting": false 00:24:08.947 }, 00:24:08.947 "vs": { 00:24:08.947 "nvme_version": "1.3" 00:24:08.947 }, 00:24:08.947 "ns_data": { 00:24:08.947 "id": 1, 00:24:08.947 "can_share": true 00:24:08.947 } 00:24:08.947 } 00:24:08.947 ], 00:24:08.947 "mp_policy": "active_passive" 00:24:08.947 } 00:24:08.947 } 00:24:08.947 ] 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
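The rpc_cmd invocations traced here effectively forward their arguments to SPDK's scripts/rpc.py against the target's RPC socket. The async_init configuration exercised so far reduces to roughly the following sequence; the socket is the default, the NGUID is the one uuidgen produced in this run, and this is a sketch of the flow rather than the literal test script:

    # target side: TCP transport, a 1024 MiB null bdev with 512-byte blocks, one subsystem
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 1024 512
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g aa67ad0c20b14ddea9ca16c38029c2d8
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: attach over TCP, then inspect the resulting nvme0n1 bdev
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1

In the bdev_get_bdevs dumps, note how ctrlr_data.cntlid advances (1 on the first attach, 2 after bdev_nvme_reset_controller, 3 after the later attach on port 4421) while the namespace UUID stays aa67ad0c-20b1-4dde-a9ca-16c38029c2d8.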
00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.947 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.947 [2024-07-15 13:09:30.771324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:08.947 [2024-07-15 13:09:30.771383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17599f0 (9): Bad file descriptor 00:24:09.207 [2024-07-15 13:09:30.903354] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:09.207 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.207 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:09.207 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.207 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.207 [ 00:24:09.207 { 00:24:09.207 "name": "nvme0n1", 00:24:09.207 "aliases": [ 00:24:09.207 "aa67ad0c-20b1-4dde-a9ca-16c38029c2d8" 00:24:09.207 ], 00:24:09.207 "product_name": "NVMe disk", 00:24:09.207 "block_size": 512, 00:24:09.207 "num_blocks": 2097152, 00:24:09.207 "uuid": "aa67ad0c-20b1-4dde-a9ca-16c38029c2d8", 00:24:09.207 "assigned_rate_limits": { 00:24:09.207 "rw_ios_per_sec": 0, 00:24:09.207 "rw_mbytes_per_sec": 0, 00:24:09.207 "r_mbytes_per_sec": 0, 00:24:09.207 "w_mbytes_per_sec": 0 00:24:09.207 }, 00:24:09.207 "claimed": false, 00:24:09.207 "zoned": false, 00:24:09.207 "supported_io_types": { 00:24:09.207 "read": true, 00:24:09.207 "write": true, 00:24:09.207 "unmap": false, 00:24:09.207 "flush": true, 00:24:09.207 "reset": true, 00:24:09.207 "nvme_admin": true, 00:24:09.207 "nvme_io": true, 00:24:09.207 "nvme_io_md": false, 00:24:09.207 "write_zeroes": true, 00:24:09.207 "zcopy": false, 00:24:09.207 "get_zone_info": false, 00:24:09.207 "zone_management": false, 00:24:09.207 "zone_append": false, 00:24:09.207 "compare": true, 00:24:09.207 "compare_and_write": true, 00:24:09.207 "abort": true, 00:24:09.207 "seek_hole": false, 00:24:09.207 "seek_data": false, 00:24:09.207 "copy": true, 00:24:09.207 "nvme_iov_md": false 00:24:09.207 }, 00:24:09.207 "memory_domains": [ 00:24:09.207 { 00:24:09.207 "dma_device_id": "system", 00:24:09.207 "dma_device_type": 1 00:24:09.207 } 00:24:09.207 ], 00:24:09.207 "driver_specific": { 00:24:09.207 "nvme": [ 00:24:09.207 { 00:24:09.207 "trid": { 00:24:09.207 "trtype": "TCP", 00:24:09.207 "adrfam": "IPv4", 00:24:09.207 "traddr": "10.0.0.2", 00:24:09.207 "trsvcid": "4420", 00:24:09.207 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:09.207 }, 00:24:09.207 "ctrlr_data": { 00:24:09.207 "cntlid": 2, 00:24:09.207 "vendor_id": "0x8086", 00:24:09.207 "model_number": "SPDK bdev Controller", 00:24:09.207 "serial_number": "00000000000000000000", 00:24:09.207 "firmware_revision": "24.09", 00:24:09.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:09.207 "oacs": { 00:24:09.207 "security": 0, 00:24:09.207 "format": 0, 00:24:09.207 "firmware": 0, 00:24:09.207 "ns_manage": 0 00:24:09.207 }, 00:24:09.207 "multi_ctrlr": true, 00:24:09.207 "ana_reporting": false 00:24:09.207 }, 00:24:09.208 "vs": { 00:24:09.208 "nvme_version": "1.3" 00:24:09.208 }, 00:24:09.208 "ns_data": { 00:24:09.208 "id": 1, 00:24:09.208 "can_share": true 00:24:09.208 } 00:24:09.208 } 00:24:09.208 ], 00:24:09.208 "mp_policy": "active_passive" 00:24:09.208 } 00:24:09.208 } 
00:24:09.208 ] 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.nNAlC4AuiM 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.nNAlC4AuiM 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.208 [2024-07-15 13:09:30.967948] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.208 [2024-07-15 13:09:30.968058] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nNAlC4AuiM 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.208 [2024-07-15 13:09:30.979972] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.nNAlC4AuiM 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.208 13:09:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.208 [2024-07-15 13:09:30.992024] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:09.208 [2024-07-15 13:09:30.992061] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
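The second attach exercises TLS: a pre-shared key is written to a temp file, the subsystem stops allowing arbitrary hosts, a --secure-channel listener is added on port 4421, and the host NQN is registered with that PSK before reconnecting. A rough equivalent of the steps traced above, with the key string taken verbatim from the trace and the temp-file name being whatever mktemp returns on a given run:

    KEY_PATH=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"

Both sides warn that this PSK-by-path interface is deprecated and scheduled for removal in v24.09, which is what the WARNING lines above and the deprecation counters printed at shutdown refer to.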
00:24:09.467 nvme0n1 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.467 [ 00:24:09.467 { 00:24:09.467 "name": "nvme0n1", 00:24:09.467 "aliases": [ 00:24:09.467 "aa67ad0c-20b1-4dde-a9ca-16c38029c2d8" 00:24:09.467 ], 00:24:09.467 "product_name": "NVMe disk", 00:24:09.467 "block_size": 512, 00:24:09.467 "num_blocks": 2097152, 00:24:09.467 "uuid": "aa67ad0c-20b1-4dde-a9ca-16c38029c2d8", 00:24:09.467 "assigned_rate_limits": { 00:24:09.467 "rw_ios_per_sec": 0, 00:24:09.467 "rw_mbytes_per_sec": 0, 00:24:09.467 "r_mbytes_per_sec": 0, 00:24:09.467 "w_mbytes_per_sec": 0 00:24:09.467 }, 00:24:09.467 "claimed": false, 00:24:09.467 "zoned": false, 00:24:09.467 "supported_io_types": { 00:24:09.467 "read": true, 00:24:09.467 "write": true, 00:24:09.467 "unmap": false, 00:24:09.467 "flush": true, 00:24:09.467 "reset": true, 00:24:09.467 "nvme_admin": true, 00:24:09.467 "nvme_io": true, 00:24:09.467 "nvme_io_md": false, 00:24:09.467 "write_zeroes": true, 00:24:09.467 "zcopy": false, 00:24:09.467 "get_zone_info": false, 00:24:09.467 "zone_management": false, 00:24:09.467 "zone_append": false, 00:24:09.467 "compare": true, 00:24:09.467 "compare_and_write": true, 00:24:09.467 "abort": true, 00:24:09.467 "seek_hole": false, 00:24:09.467 "seek_data": false, 00:24:09.467 "copy": true, 00:24:09.467 "nvme_iov_md": false 00:24:09.467 }, 00:24:09.467 "memory_domains": [ 00:24:09.467 { 00:24:09.467 "dma_device_id": "system", 00:24:09.467 "dma_device_type": 1 00:24:09.467 } 00:24:09.467 ], 00:24:09.467 "driver_specific": { 00:24:09.467 "nvme": [ 00:24:09.467 { 00:24:09.467 "trid": { 00:24:09.467 "trtype": "TCP", 00:24:09.467 "adrfam": "IPv4", 00:24:09.467 "traddr": "10.0.0.2", 00:24:09.467 "trsvcid": "4421", 00:24:09.467 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:09.467 }, 00:24:09.467 "ctrlr_data": { 00:24:09.467 "cntlid": 3, 00:24:09.467 "vendor_id": "0x8086", 00:24:09.467 "model_number": "SPDK bdev Controller", 00:24:09.467 "serial_number": "00000000000000000000", 00:24:09.467 "firmware_revision": "24.09", 00:24:09.467 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:09.467 "oacs": { 00:24:09.467 "security": 0, 00:24:09.467 "format": 0, 00:24:09.467 "firmware": 0, 00:24:09.467 "ns_manage": 0 00:24:09.467 }, 00:24:09.467 "multi_ctrlr": true, 00:24:09.467 "ana_reporting": false 00:24:09.467 }, 00:24:09.467 "vs": { 00:24:09.467 "nvme_version": "1.3" 00:24:09.467 }, 00:24:09.467 "ns_data": { 00:24:09.467 "id": 1, 00:24:09.467 "can_share": true 00:24:09.467 } 00:24:09.467 } 00:24:09.467 ], 00:24:09.467 "mp_policy": "active_passive" 00:24:09.467 } 00:24:09.467 } 00:24:09.467 ] 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.nNAlC4AuiM 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:09.467 rmmod nvme_tcp 00:24:09.467 rmmod nvme_fabrics 00:24:09.467 rmmod nvme_keyring 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 783244 ']' 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 783244 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 783244 ']' 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 783244 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 783244 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 783244' 00:24:09.467 killing process with pid 783244 00:24:09.467 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 783244 00:24:09.467 [2024-07-15 13:09:31.242860] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:09.468 [2024-07-15 13:09:31.242886] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:09.468 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 783244 00:24:09.727 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:09.727 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:09.727 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:09.727 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:09.727 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:09.727 13:09:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.727 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.727 13:09:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:24:11.639 13:09:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:11.639 00:24:11.639 real 0m11.990s 00:24:11.639 user 0m4.232s 00:24:11.639 sys 0m6.215s 00:24:11.639 13:09:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:11.639 13:09:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.639 ************************************ 00:24:11.639 END TEST nvmf_async_init 00:24:11.639 ************************************ 00:24:11.900 13:09:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:11.900 13:09:33 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:11.900 13:09:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:11.900 13:09:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:11.900 13:09:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:11.900 ************************************ 00:24:11.900 START TEST dma 00:24:11.900 ************************************ 00:24:11.900 13:09:33 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:11.900 * Looking for test storage... 00:24:11.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.900 13:09:33 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.900 13:09:33 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.900 13:09:33 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.900 13:09:33 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.900 13:09:33 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.900 13:09:33 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.900 13:09:33 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.900 13:09:33 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:11.900 13:09:33 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:11.900 13:09:33 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:11.900 13:09:33 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:11.900 13:09:33 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:11.900 00:24:11.900 real 0m0.136s 00:24:11.900 user 0m0.069s 00:24:11.900 sys 0m0.076s 00:24:11.900 13:09:33 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:11.900 13:09:33 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:24:11.900 ************************************ 00:24:11.900 END TEST dma 00:24:11.900 ************************************ 00:24:11.900 13:09:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:11.900 13:09:33 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:11.900 13:09:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:11.900 13:09:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:11.900 13:09:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:12.162 ************************************ 00:24:12.162 START TEST nvmf_identify 00:24:12.162 ************************************ 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:12.162 * Looking for test storage... 00:24:12.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:12.162 13:09:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.318 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.318 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:20.318 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:20.318 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:20.319 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:20.319 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:20.319 Found net devices under 0000:31:00.0: cvl_0_0 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
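For orientation: gather_supported_nvmf_pci_devs classifies candidate NICs purely by PCI vendor:device pairs (Intel 0x8086 with 0x1592/0x159b for E810 and 0x37d2 for X722, Mellanox 0x15b3 with the ConnectX IDs built up above) and then maps each match to its kernel netdev via /sys/bus/pci/devices/$pci/net. Outside the harness the same lookup can be approximated, assuming standard pciutils, with something like:

    lspci -nn -d 8086:159b                       # list E810 ports with the 0x159b ID seen in this run
    ls /sys/bus/pci/devices/0000:31:00.0/net     # PCI address -> netdev name (cvl_0_0 here)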
00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:20.319 Found net devices under 0000:31:00.1: cvl_0_1 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:20.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:20.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.741 ms 00:24:20.319 00:24:20.319 --- 10.0.0.2 ping statistics --- 00:24:20.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.319 rtt min/avg/max/mdev = 0.741/0.741/0.741/0.000 ms 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:24:20.319 00:24:20.319 --- 10.0.0.1 ping statistics --- 00:24:20.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.319 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=788322 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 788322 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 788322 ']' 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.319 13:09:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.319 [2024-07-15 13:09:41.971551] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
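The nvmf_tcp_init sequence replayed above is what wires the two back-to-back e810 ports (cvl_0_0 and cvl_0_1, detected earlier under 0000:31:00.0/.1) into a single-host NVMe/TCP topology: the target-side port is moved into a private network namespace so initiator and target traffic still crosses real hardware. A condensed sketch of the same commands, taken from the nvmf/common.sh trace above (interface and namespace names are specific to this run):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP (port 4420) traffic in
    ping -c 1 10.0.0.2                                                  # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> root ns

Both pings succeed in this run (0.741 ms and 0.292 ms round trips), confirming the link works before nvmf_tgt is started inside the namespace.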
00:24:20.319 [2024-07-15 13:09:41.971604] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.319 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.319 [2024-07-15 13:09:42.045698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.319 [2024-07-15 13:09:42.112656] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.319 [2024-07-15 13:09:42.112696] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.319 [2024-07-15 13:09:42.112704] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.319 [2024-07-15 13:09:42.112711] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.319 [2024-07-15 13:09:42.112717] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.319 [2024-07-15 13:09:42.116248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.319 [2024-07-15 13:09:42.116393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.319 [2024-07-15 13:09:42.116539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.319 [2024-07-15 13:09:42.116670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.580 [2024-07-15 13:09:42.224061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.580 Malloc0 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.580 [2024-07-15 13:09:42.321121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.580 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.580 [ 00:24:20.580 { 00:24:20.580 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:20.580 "subtype": "Discovery", 00:24:20.580 "listen_addresses": [ 00:24:20.580 { 00:24:20.580 "trtype": "TCP", 00:24:20.580 "adrfam": "IPv4", 00:24:20.580 "traddr": "10.0.0.2", 00:24:20.580 "trsvcid": "4420" 00:24:20.580 } 00:24:20.580 ], 00:24:20.580 "allow_any_host": true, 00:24:20.580 "hosts": [] 00:24:20.580 }, 00:24:20.580 { 00:24:20.581 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.581 "subtype": "NVMe", 00:24:20.581 "listen_addresses": [ 00:24:20.581 { 00:24:20.581 "trtype": "TCP", 00:24:20.581 "adrfam": "IPv4", 00:24:20.581 "traddr": "10.0.0.2", 00:24:20.581 "trsvcid": "4420" 00:24:20.581 } 00:24:20.581 ], 00:24:20.581 "allow_any_host": true, 00:24:20.581 "hosts": [], 00:24:20.581 "serial_number": "SPDK00000000000001", 00:24:20.581 "model_number": "SPDK bdev Controller", 00:24:20.581 "max_namespaces": 32, 00:24:20.581 "min_cntlid": 1, 00:24:20.581 "max_cntlid": 65519, 00:24:20.581 "namespaces": [ 00:24:20.581 { 00:24:20.581 "nsid": 1, 00:24:20.581 "bdev_name": "Malloc0", 00:24:20.581 "name": "Malloc0", 00:24:20.581 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:20.581 "eui64": "ABCDEF0123456789", 00:24:20.581 "uuid": "fb6b8501-84bd-41ae-b794-4a0a66501cb8" 00:24:20.581 } 00:24:20.581 ] 00:24:20.581 } 00:24:20.581 ] 00:24:20.581 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.581 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:20.581 [2024-07-15 13:09:42.383993] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
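The host/identify.sh steps traced above configure the freshly started target entirely over JSON-RPC: a TCP transport, a small Malloc0 ramdisk bdev (64 MB, 512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev as a namespace, and listeners for both cnode1 and the discovery service on 10.0.0.2:4420. Outside the harness, the same setup can be reproduced with the stock scripts/rpc.py client; this is a sketch assuming an SPDK source tree and the default /var/tmp/spdk.sock RPC socket (the rpc_cmd wrapper above issues the same calls):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems     # should list the discovery subsystem plus cnode1

The nvmf_get_subsystems output above shows exactly that: the discovery subsystem and cnode1, the latter exposing Malloc0 as nsid 1 with the NGUID/EUI64 values passed in. The spdk_nvme_identify run that starts here connects to the discovery subsystem first.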
00:24:20.581 [2024-07-15 13:09:42.384062] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788390 ] 00:24:20.581 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.843 [2024-07-15 13:09:42.417962] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:20.843 [2024-07-15 13:09:42.418014] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:20.843 [2024-07-15 13:09:42.418020] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:20.843 [2024-07-15 13:09:42.418031] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:20.843 [2024-07-15 13:09:42.418037] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:20.843 [2024-07-15 13:09:42.418396] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:20.843 [2024-07-15 13:09:42.418425] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2181ec0 0 00:24:20.843 [2024-07-15 13:09:42.429240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:20.843 [2024-07-15 13:09:42.429253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:20.843 [2024-07-15 13:09:42.429257] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:20.843 [2024-07-15 13:09:42.429261] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:20.844 [2024-07-15 13:09:42.429296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.429302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.429306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2181ec0) 00:24:20.844 [2024-07-15 13:09:42.429321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:20.844 [2024-07-15 13:09:42.429336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2204e40, cid 0, qid 0 00:24:20.844 [2024-07-15 13:09:42.437243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.844 [2024-07-15 13:09:42.437253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.844 [2024-07-15 13:09:42.437260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.437265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2204e40) on tqpair=0x2181ec0 00:24:20.844 [2024-07-15 13:09:42.437277] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:20.844 [2024-07-15 13:09:42.437285] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:20.844 [2024-07-15 13:09:42.437290] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:20.844 [2024-07-15 13:09:42.437303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.437307] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.437311] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2181ec0) 00:24:20.844 [2024-07-15 13:09:42.437318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.844 [2024-07-15 13:09:42.437332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2204e40, cid 0, qid 0 00:24:20.844 [2024-07-15 13:09:42.437554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.844 [2024-07-15 13:09:42.437561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.844 [2024-07-15 13:09:42.437564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.437568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2204e40) on tqpair=0x2181ec0 00:24:20.844 [2024-07-15 13:09:42.437574] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:20.844 [2024-07-15 13:09:42.437581] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:20.844 [2024-07-15 13:09:42.437588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.437592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.437595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2181ec0) 00:24:20.844 [2024-07-15 13:09:42.437602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.844 [2024-07-15 13:09:42.437612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2204e40, cid 0, qid 0 00:24:20.844 [2024-07-15 13:09:42.437796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.844 [2024-07-15 13:09:42.437803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.844 [2024-07-15 13:09:42.437806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.437810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2204e40) on tqpair=0x2181ec0 00:24:20.844 [2024-07-15 13:09:42.437815] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:20.844 [2024-07-15 13:09:42.437823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:20.844 [2024-07-15 13:09:42.437829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.437833] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.437836] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2181ec0) 00:24:20.844 [2024-07-15 13:09:42.437843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.844 [2024-07-15 13:09:42.437853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2204e40, cid 0, qid 0 00:24:20.844 [2024-07-15 13:09:42.438031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.844 
[2024-07-15 13:09:42.438038] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.844 [2024-07-15 13:09:42.438044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.438047] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2204e40) on tqpair=0x2181ec0 00:24:20.844 [2024-07-15 13:09:42.438052] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:20.844 [2024-07-15 13:09:42.438062] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.438065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.438069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2181ec0) 00:24:20.844 [2024-07-15 13:09:42.438076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.844 [2024-07-15 13:09:42.438086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2204e40, cid 0, qid 0 00:24:20.844 [2024-07-15 13:09:42.438319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.844 [2024-07-15 13:09:42.438325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.844 [2024-07-15 13:09:42.438329] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.438332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2204e40) on tqpair=0x2181ec0 00:24:20.844 [2024-07-15 13:09:42.438337] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:20.844 [2024-07-15 13:09:42.438342] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:20.844 [2024-07-15 13:09:42.438349] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:20.844 [2024-07-15 13:09:42.438454] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:20.844 [2024-07-15 13:09:42.438459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:20.844 [2024-07-15 13:09:42.438467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.438471] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.438474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2181ec0) 00:24:20.844 [2024-07-15 13:09:42.438481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.844 [2024-07-15 13:09:42.438491] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2204e40, cid 0, qid 0 00:24:20.844 [2024-07-15 13:09:42.438655] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.844 [2024-07-15 13:09:42.438662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.844 [2024-07-15 13:09:42.438665] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.438669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2204e40) on tqpair=0x2181ec0 00:24:20.844 [2024-07-15 13:09:42.438674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:20.844 [2024-07-15 13:09:42.438683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.438687] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.438691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2181ec0) 00:24:20.844 [2024-07-15 13:09:42.438697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.844 [2024-07-15 13:09:42.438707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2204e40, cid 0, qid 0 00:24:20.844 [2024-07-15 13:09:42.438882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.844 [2024-07-15 13:09:42.438891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.844 [2024-07-15 13:09:42.438894] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.438898] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2204e40) on tqpair=0x2181ec0 00:24:20.844 [2024-07-15 13:09:42.438903] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:20.844 [2024-07-15 13:09:42.438908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:20.844 [2024-07-15 13:09:42.438915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:20.844 [2024-07-15 13:09:42.438922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:20.844 [2024-07-15 13:09:42.438931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.438934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2181ec0) 00:24:20.844 [2024-07-15 13:09:42.438941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.844 [2024-07-15 13:09:42.438951] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2204e40, cid 0, qid 0 00:24:20.844 [2024-07-15 13:09:42.439162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.844 [2024-07-15 13:09:42.439169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.844 [2024-07-15 13:09:42.439172] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.439176] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2181ec0): datao=0, datal=4096, cccid=0 00:24:20.844 [2024-07-15 13:09:42.439181] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2204e40) on tqpair(0x2181ec0): expected_datao=0, payload_size=4096 00:24:20.844 [2024-07-15 13:09:42.439185] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.439193] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.439197] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.479422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.844 [2024-07-15 13:09:42.479434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.844 [2024-07-15 13:09:42.479437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.844 [2024-07-15 13:09:42.479441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2204e40) on tqpair=0x2181ec0 00:24:20.844 [2024-07-15 13:09:42.479449] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:20.844 [2024-07-15 13:09:42.479457] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:20.844 [2024-07-15 13:09:42.479462] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:20.844 [2024-07-15 13:09:42.479467] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:20.844 [2024-07-15 13:09:42.479471] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:20.844 [2024-07-15 13:09:42.479476] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:20.844 [2024-07-15 13:09:42.479485] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:20.845 [2024-07-15 13:09:42.479492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.479496] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.479501] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2181ec0) 00:24:20.845 [2024-07-15 13:09:42.479509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:20.845 [2024-07-15 13:09:42.479521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2204e40, cid 0, qid 0 00:24:20.845 [2024-07-15 13:09:42.479682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.845 [2024-07-15 13:09:42.479688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.845 [2024-07-15 13:09:42.479692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.479696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2204e40) on tqpair=0x2181ec0 00:24:20.845 [2024-07-15 13:09:42.479703] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.479707] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.479710] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2181ec0) 00:24:20.845 [2024-07-15 13:09:42.479717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.845 [2024-07-15 13:09:42.479723] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.479726] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.479730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2181ec0) 00:24:20.845 [2024-07-15 13:09:42.479736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.845 [2024-07-15 13:09:42.479741] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.479745] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.479748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2181ec0) 00:24:20.845 [2024-07-15 13:09:42.479754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.845 [2024-07-15 13:09:42.479760] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.479764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.479767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2181ec0) 00:24:20.845 [2024-07-15 13:09:42.479773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.845 [2024-07-15 13:09:42.479778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:20.845 [2024-07-15 13:09:42.479788] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:20.845 [2024-07-15 13:09:42.479794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.479798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2181ec0) 00:24:20.845 [2024-07-15 13:09:42.479805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.845 [2024-07-15 13:09:42.479816] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2204e40, cid 0, qid 0 00:24:20.845 [2024-07-15 13:09:42.479821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2204fc0, cid 1, qid 0 00:24:20.845 [2024-07-15 13:09:42.479826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2205140, cid 2, qid 0 00:24:20.845 [2024-07-15 13:09:42.479830] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22052c0, cid 3, qid 0 00:24:20.845 [2024-07-15 13:09:42.479835] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2205440, cid 4, qid 0 00:24:20.845 [2024-07-15 13:09:42.480067] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.845 [2024-07-15 13:09:42.480074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.845 [2024-07-15 13:09:42.480077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.480081] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2205440) on tqpair=0x2181ec0 00:24:20.845 [2024-07-15 13:09:42.480086] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:20.845 [2024-07-15 13:09:42.480092] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:20.845 [2024-07-15 13:09:42.480102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.480106] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2181ec0) 00:24:20.845 [2024-07-15 13:09:42.480112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.845 [2024-07-15 13:09:42.480122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2205440, cid 4, qid 0 00:24:20.845 [2024-07-15 13:09:42.484241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.845 [2024-07-15 13:09:42.484250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.845 [2024-07-15 13:09:42.484253] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.484257] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2181ec0): datao=0, datal=4096, cccid=4 00:24:20.845 [2024-07-15 13:09:42.484261] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2205440) on tqpair(0x2181ec0): expected_datao=0, payload_size=4096 00:24:20.845 [2024-07-15 13:09:42.484265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.484272] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.484275] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.484281] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.845 [2024-07-15 13:09:42.484287] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.845 [2024-07-15 13:09:42.484290] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.484294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2205440) on tqpair=0x2181ec0 00:24:20.845 [2024-07-15 13:09:42.484305] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:20.845 [2024-07-15 13:09:42.484329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.484333] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2181ec0) 00:24:20.845 [2024-07-15 13:09:42.484339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.845 [2024-07-15 13:09:42.484346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.484350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.484353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2181ec0) 00:24:20.845 [2024-07-15 13:09:42.484359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.845 [2024-07-15 13:09:42.484373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x2205440, cid 4, qid 0 00:24:20.845 [2024-07-15 13:09:42.484378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22055c0, cid 5, qid 0 00:24:20.845 [2024-07-15 13:09:42.484597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.845 [2024-07-15 13:09:42.484604] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.845 [2024-07-15 13:09:42.484607] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.484613] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2181ec0): datao=0, datal=1024, cccid=4 00:24:20.845 [2024-07-15 13:09:42.484617] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2205440) on tqpair(0x2181ec0): expected_datao=0, payload_size=1024 00:24:20.845 [2024-07-15 13:09:42.484622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.484628] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.484632] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.484637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.845 [2024-07-15 13:09:42.484643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.845 [2024-07-15 13:09:42.484647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.484650] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22055c0) on tqpair=0x2181ec0 00:24:20.845 [2024-07-15 13:09:42.527241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.845 [2024-07-15 13:09:42.527251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.845 [2024-07-15 13:09:42.527255] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.527258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2205440) on tqpair=0x2181ec0 00:24:20.845 [2024-07-15 13:09:42.527276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.527280] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2181ec0) 00:24:20.845 [2024-07-15 13:09:42.527287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.845 [2024-07-15 13:09:42.527302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2205440, cid 4, qid 0 00:24:20.845 [2024-07-15 13:09:42.527527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.845 [2024-07-15 13:09:42.527534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.845 [2024-07-15 13:09:42.527538] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.527541] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2181ec0): datao=0, datal=3072, cccid=4 00:24:20.845 [2024-07-15 13:09:42.527545] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2205440) on tqpair(0x2181ec0): expected_datao=0, payload_size=3072 00:24:20.845 [2024-07-15 13:09:42.527550] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.527576] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.527580] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.568433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.845 [2024-07-15 13:09:42.568443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.845 [2024-07-15 13:09:42.568447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.568450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2205440) on tqpair=0x2181ec0 00:24:20.845 [2024-07-15 13:09:42.568460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.568464] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2181ec0) 00:24:20.845 [2024-07-15 13:09:42.568470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.845 [2024-07-15 13:09:42.568484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2205440, cid 4, qid 0 00:24:20.845 [2024-07-15 13:09:42.568701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.845 [2024-07-15 13:09:42.568707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.845 [2024-07-15 13:09:42.568711] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.845 [2024-07-15 13:09:42.568714] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2181ec0): datao=0, datal=8, cccid=4 00:24:20.846 [2024-07-15 13:09:42.568721] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2205440) on tqpair(0x2181ec0): expected_datao=0, payload_size=8 00:24:20.846 [2024-07-15 13:09:42.568726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.846 [2024-07-15 13:09:42.568732] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.846 [2024-07-15 13:09:42.568736] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.846 [2024-07-15 13:09:42.609422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.846 [2024-07-15 13:09:42.609432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.846 [2024-07-15 13:09:42.609436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.846 [2024-07-15 13:09:42.609439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2205440) on tqpair=0x2181ec0 00:24:20.846 ===================================================== 00:24:20.846 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:20.846 ===================================================== 00:24:20.846 Controller Capabilities/Features 00:24:20.846 ================================ 00:24:20.846 Vendor ID: 0000 00:24:20.846 Subsystem Vendor ID: 0000 00:24:20.846 Serial Number: .................... 00:24:20.846 Model Number: ........................................ 
00:24:20.846 Firmware Version: 24.09 00:24:20.846 Recommended Arb Burst: 0 00:24:20.846 IEEE OUI Identifier: 00 00 00 00:24:20.846 Multi-path I/O 00:24:20.846 May have multiple subsystem ports: No 00:24:20.846 May have multiple controllers: No 00:24:20.846 Associated with SR-IOV VF: No 00:24:20.846 Max Data Transfer Size: 131072 00:24:20.846 Max Number of Namespaces: 0 00:24:20.846 Max Number of I/O Queues: 1024 00:24:20.846 NVMe Specification Version (VS): 1.3 00:24:20.846 NVMe Specification Version (Identify): 1.3 00:24:20.846 Maximum Queue Entries: 128 00:24:20.846 Contiguous Queues Required: Yes 00:24:20.846 Arbitration Mechanisms Supported 00:24:20.846 Weighted Round Robin: Not Supported 00:24:20.846 Vendor Specific: Not Supported 00:24:20.846 Reset Timeout: 15000 ms 00:24:20.846 Doorbell Stride: 4 bytes 00:24:20.846 NVM Subsystem Reset: Not Supported 00:24:20.846 Command Sets Supported 00:24:20.846 NVM Command Set: Supported 00:24:20.846 Boot Partition: Not Supported 00:24:20.846 Memory Page Size Minimum: 4096 bytes 00:24:20.846 Memory Page Size Maximum: 4096 bytes 00:24:20.846 Persistent Memory Region: Not Supported 00:24:20.846 Optional Asynchronous Events Supported 00:24:20.846 Namespace Attribute Notices: Not Supported 00:24:20.846 Firmware Activation Notices: Not Supported 00:24:20.846 ANA Change Notices: Not Supported 00:24:20.846 PLE Aggregate Log Change Notices: Not Supported 00:24:20.846 LBA Status Info Alert Notices: Not Supported 00:24:20.846 EGE Aggregate Log Change Notices: Not Supported 00:24:20.846 Normal NVM Subsystem Shutdown event: Not Supported 00:24:20.846 Zone Descriptor Change Notices: Not Supported 00:24:20.846 Discovery Log Change Notices: Supported 00:24:20.846 Controller Attributes 00:24:20.846 128-bit Host Identifier: Not Supported 00:24:20.846 Non-Operational Permissive Mode: Not Supported 00:24:20.846 NVM Sets: Not Supported 00:24:20.846 Read Recovery Levels: Not Supported 00:24:20.846 Endurance Groups: Not Supported 00:24:20.846 Predictable Latency Mode: Not Supported 00:24:20.846 Traffic Based Keep ALive: Not Supported 00:24:20.846 Namespace Granularity: Not Supported 00:24:20.846 SQ Associations: Not Supported 00:24:20.846 UUID List: Not Supported 00:24:20.846 Multi-Domain Subsystem: Not Supported 00:24:20.846 Fixed Capacity Management: Not Supported 00:24:20.846 Variable Capacity Management: Not Supported 00:24:20.846 Delete Endurance Group: Not Supported 00:24:20.846 Delete NVM Set: Not Supported 00:24:20.846 Extended LBA Formats Supported: Not Supported 00:24:20.846 Flexible Data Placement Supported: Not Supported 00:24:20.846 00:24:20.846 Controller Memory Buffer Support 00:24:20.846 ================================ 00:24:20.846 Supported: No 00:24:20.846 00:24:20.846 Persistent Memory Region Support 00:24:20.846 ================================ 00:24:20.846 Supported: No 00:24:20.846 00:24:20.846 Admin Command Set Attributes 00:24:20.846 ============================ 00:24:20.846 Security Send/Receive: Not Supported 00:24:20.846 Format NVM: Not Supported 00:24:20.846 Firmware Activate/Download: Not Supported 00:24:20.846 Namespace Management: Not Supported 00:24:20.846 Device Self-Test: Not Supported 00:24:20.846 Directives: Not Supported 00:24:20.846 NVMe-MI: Not Supported 00:24:20.846 Virtualization Management: Not Supported 00:24:20.846 Doorbell Buffer Config: Not Supported 00:24:20.846 Get LBA Status Capability: Not Supported 00:24:20.846 Command & Feature Lockdown Capability: Not Supported 00:24:20.846 Abort Command Limit: 1 00:24:20.846 Async 
Event Request Limit: 4 00:24:20.846 Number of Firmware Slots: N/A 00:24:20.846 Firmware Slot 1 Read-Only: N/A 00:24:20.846 Firmware Activation Without Reset: N/A 00:24:20.846 Multiple Update Detection Support: N/A 00:24:20.846 Firmware Update Granularity: No Information Provided 00:24:20.846 Per-Namespace SMART Log: No 00:24:20.846 Asymmetric Namespace Access Log Page: Not Supported 00:24:20.846 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:20.846 Command Effects Log Page: Not Supported 00:24:20.846 Get Log Page Extended Data: Supported 00:24:20.846 Telemetry Log Pages: Not Supported 00:24:20.846 Persistent Event Log Pages: Not Supported 00:24:20.846 Supported Log Pages Log Page: May Support 00:24:20.846 Commands Supported & Effects Log Page: Not Supported 00:24:20.846 Feature Identifiers & Effects Log Page:May Support 00:24:20.846 NVMe-MI Commands & Effects Log Page: May Support 00:24:20.846 Data Area 4 for Telemetry Log: Not Supported 00:24:20.846 Error Log Page Entries Supported: 128 00:24:20.846 Keep Alive: Not Supported 00:24:20.846 00:24:20.846 NVM Command Set Attributes 00:24:20.846 ========================== 00:24:20.846 Submission Queue Entry Size 00:24:20.846 Max: 1 00:24:20.846 Min: 1 00:24:20.846 Completion Queue Entry Size 00:24:20.846 Max: 1 00:24:20.846 Min: 1 00:24:20.846 Number of Namespaces: 0 00:24:20.846 Compare Command: Not Supported 00:24:20.846 Write Uncorrectable Command: Not Supported 00:24:20.846 Dataset Management Command: Not Supported 00:24:20.846 Write Zeroes Command: Not Supported 00:24:20.846 Set Features Save Field: Not Supported 00:24:20.846 Reservations: Not Supported 00:24:20.846 Timestamp: Not Supported 00:24:20.846 Copy: Not Supported 00:24:20.846 Volatile Write Cache: Not Present 00:24:20.846 Atomic Write Unit (Normal): 1 00:24:20.846 Atomic Write Unit (PFail): 1 00:24:20.846 Atomic Compare & Write Unit: 1 00:24:20.846 Fused Compare & Write: Supported 00:24:20.846 Scatter-Gather List 00:24:20.846 SGL Command Set: Supported 00:24:20.846 SGL Keyed: Supported 00:24:20.846 SGL Bit Bucket Descriptor: Not Supported 00:24:20.846 SGL Metadata Pointer: Not Supported 00:24:20.846 Oversized SGL: Not Supported 00:24:20.846 SGL Metadata Address: Not Supported 00:24:20.846 SGL Offset: Supported 00:24:20.846 Transport SGL Data Block: Not Supported 00:24:20.846 Replay Protected Memory Block: Not Supported 00:24:20.846 00:24:20.846 Firmware Slot Information 00:24:20.846 ========================= 00:24:20.846 Active slot: 0 00:24:20.846 00:24:20.846 00:24:20.846 Error Log 00:24:20.846 ========= 00:24:20.846 00:24:20.846 Active Namespaces 00:24:20.846 ================= 00:24:20.846 Discovery Log Page 00:24:20.846 ================== 00:24:20.846 Generation Counter: 2 00:24:20.846 Number of Records: 2 00:24:20.846 Record Format: 0 00:24:20.846 00:24:20.846 Discovery Log Entry 0 00:24:20.846 ---------------------- 00:24:20.846 Transport Type: 3 (TCP) 00:24:20.846 Address Family: 1 (IPv4) 00:24:20.846 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:20.846 Entry Flags: 00:24:20.846 Duplicate Returned Information: 1 00:24:20.846 Explicit Persistent Connection Support for Discovery: 1 00:24:20.846 Transport Requirements: 00:24:20.846 Secure Channel: Not Required 00:24:20.846 Port ID: 0 (0x0000) 00:24:20.846 Controller ID: 65535 (0xffff) 00:24:20.846 Admin Max SQ Size: 128 00:24:20.846 Transport Service Identifier: 4420 00:24:20.846 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:20.846 Transport Address: 10.0.0.2 00:24:20.846 
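The discovery log page in this report carries two records (Generation Counter 2, Number of Records 2): Entry 0 above describes the discovery subsystem itself, and Entry 1, which follows, is the NVM subsystem nqn.2016-06.io.spdk:cnode1 created earlier. As a cross-check outside this test, the same page could also be read with the kernel initiator, assuming nvme-cli is installed (the nvme-tcp module was already modprobed above); this is an optional aside, not part of the test flow:

    nvme discover -t tcp -a 10.0.0.2 -s 4420                                # prints the same two discovery records
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # optionally attach cnode1 via the kernel

The report shown here comes from SPDK's userspace spdk_nvme_identify; a second run of the same tool against cnode1 follows further down.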
Discovery Log Entry 1 00:24:20.846 ---------------------- 00:24:20.846 Transport Type: 3 (TCP) 00:24:20.846 Address Family: 1 (IPv4) 00:24:20.846 Subsystem Type: 2 (NVM Subsystem) 00:24:20.846 Entry Flags: 00:24:20.846 Duplicate Returned Information: 0 00:24:20.846 Explicit Persistent Connection Support for Discovery: 0 00:24:20.846 Transport Requirements: 00:24:20.846 Secure Channel: Not Required 00:24:20.846 Port ID: 0 (0x0000) 00:24:20.846 Controller ID: 65535 (0xffff) 00:24:20.846 Admin Max SQ Size: 128 00:24:20.846 Transport Service Identifier: 4420 00:24:20.846 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:20.846 Transport Address: 10.0.0.2 [2024-07-15 13:09:42.609523] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:20.846 [2024-07-15 13:09:42.609534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2204e40) on tqpair=0x2181ec0 00:24:20.847 [2024-07-15 13:09:42.609541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.847 [2024-07-15 13:09:42.609546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2204fc0) on tqpair=0x2181ec0 00:24:20.847 [2024-07-15 13:09:42.609551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.847 [2024-07-15 13:09:42.609556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2205140) on tqpair=0x2181ec0 00:24:20.847 [2024-07-15 13:09:42.609560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.847 [2024-07-15 13:09:42.609565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22052c0) on tqpair=0x2181ec0 00:24:20.847 [2024-07-15 13:09:42.609570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.847 [2024-07-15 13:09:42.609580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.609586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.609591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2181ec0) 00:24:20.847 [2024-07-15 13:09:42.609599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.847 [2024-07-15 13:09:42.609613] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22052c0, cid 3, qid 0 00:24:20.847 [2024-07-15 13:09:42.609711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.847 [2024-07-15 13:09:42.609717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.847 [2024-07-15 13:09:42.609721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.609725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22052c0) on tqpair=0x2181ec0 00:24:20.847 [2024-07-15 13:09:42.609732] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.609736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.609739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2181ec0) 00:24:20.847 [2024-07-15 
13:09:42.609746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.847 [2024-07-15 13:09:42.609759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22052c0, cid 3, qid 0 00:24:20.847 [2024-07-15 13:09:42.609948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.847 [2024-07-15 13:09:42.609955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.847 [2024-07-15 13:09:42.609958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.609962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22052c0) on tqpair=0x2181ec0 00:24:20.847 [2024-07-15 13:09:42.609968] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:20.847 [2024-07-15 13:09:42.609973] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:20.847 [2024-07-15 13:09:42.609982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.609986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.609990] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2181ec0) 00:24:20.847 [2024-07-15 13:09:42.609996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.847 [2024-07-15 13:09:42.610006] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22052c0, cid 3, qid 0 00:24:20.847 [2024-07-15 13:09:42.610217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.847 [2024-07-15 13:09:42.610224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.847 [2024-07-15 13:09:42.610227] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.610238] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22052c0) on tqpair=0x2181ec0 00:24:20.847 [2024-07-15 13:09:42.610248] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.610252] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.610255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2181ec0) 00:24:20.847 [2024-07-15 13:09:42.610262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.847 [2024-07-15 13:09:42.610272] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22052c0, cid 3, qid 0 00:24:20.847 [2024-07-15 13:09:42.610487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.847 [2024-07-15 13:09:42.610494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.847 [2024-07-15 13:09:42.610497] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.610501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22052c0) on tqpair=0x2181ec0 00:24:20.847 [2024-07-15 13:09:42.610510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.610514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.610518] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2181ec0) 00:24:20.847 [2024-07-15 13:09:42.610524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.847 [2024-07-15 13:09:42.610534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22052c0, cid 3, qid 0 00:24:20.847 [2024-07-15 13:09:42.610707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.847 [2024-07-15 13:09:42.610713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.847 [2024-07-15 13:09:42.610717] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.610720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22052c0) on tqpair=0x2181ec0 00:24:20.847 [2024-07-15 13:09:42.610730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.610733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.610737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2181ec0) 00:24:20.847 [2024-07-15 13:09:42.610743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.847 [2024-07-15 13:09:42.610753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22052c0, cid 3, qid 0 00:24:20.847 [2024-07-15 13:09:42.610960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.847 [2024-07-15 13:09:42.610966] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.847 [2024-07-15 13:09:42.610972] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.610976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22052c0) on tqpair=0x2181ec0 00:24:20.847 [2024-07-15 13:09:42.610985] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.610989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.610992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2181ec0) 00:24:20.847 [2024-07-15 13:09:42.610999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.847 [2024-07-15 13:09:42.611008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22052c0, cid 3, qid 0 00:24:20.847 [2024-07-15 13:09:42.611194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.847 [2024-07-15 13:09:42.611201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.847 [2024-07-15 13:09:42.611204] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.611209] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22052c0) on tqpair=0x2181ec0 00:24:20.847 [2024-07-15 13:09:42.611218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.611222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.611226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2181ec0) 00:24:20.847 [2024-07-15 13:09:42.615241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.847 [2024-07-15 13:09:42.615253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22052c0, cid 3, qid 0 00:24:20.847 [2024-07-15 13:09:42.615439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.847 [2024-07-15 13:09:42.615446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.847 [2024-07-15 13:09:42.615450] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.847 [2024-07-15 13:09:42.615453] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22052c0) on tqpair=0x2181ec0 00:24:20.847 [2024-07-15 13:09:42.615461] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:24:20.847 00:24:20.847 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:20.847 [2024-07-15 13:09:42.655809] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:24:20.847 [2024-07-15 13:09:42.655880] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid788517 ] 00:24:20.847 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.110 [2024-07-15 13:09:42.689834] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:21.110 [2024-07-15 13:09:42.689883] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:21.110 [2024-07-15 13:09:42.689888] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:21.110 [2024-07-15 13:09:42.689899] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:21.110 [2024-07-15 13:09:42.689905] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:21.110 [2024-07-15 13:09:42.693257] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:21.110 [2024-07-15 13:09:42.693286] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c15ec0 0 00:24:21.110 [2024-07-15 13:09:42.693482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:21.110 [2024-07-15 13:09:42.693489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:21.110 [2024-07-15 13:09:42.693493] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:21.110 [2024-07-15 13:09:42.693496] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:21.110 [2024-07-15 13:09:42.693526] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.110 [2024-07-15 13:09:42.693531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.110 [2024-07-15 13:09:42.693535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c15ec0) 00:24:21.110 [2024-07-15 13:09:42.693547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:21.110 [2024-07-15 13:09:42.693560] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c98e40, cid 0, qid 0 00:24:21.110 [2024-07-15 13:09:42.701244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.110 [2024-07-15 13:09:42.701253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.110 [2024-07-15 13:09:42.701257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.110 [2024-07-15 13:09:42.701261] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c98e40) on tqpair=0x1c15ec0 00:24:21.110 [2024-07-15 13:09:42.701272] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:21.110 [2024-07-15 13:09:42.701279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:21.110 [2024-07-15 13:09:42.701284] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:21.110 [2024-07-15 13:09:42.701295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.110 [2024-07-15 13:09:42.701299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.110 [2024-07-15 13:09:42.701303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c15ec0) 00:24:21.110 [2024-07-15 13:09:42.701310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.110 [2024-07-15 13:09:42.701322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c98e40, cid 0, qid 0 00:24:21.110 [2024-07-15 13:09:42.701512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.110 [2024-07-15 13:09:42.701519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.110 [2024-07-15 13:09:42.701523] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.110 [2024-07-15 13:09:42.701527] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c98e40) on tqpair=0x1c15ec0 00:24:21.110 [2024-07-15 13:09:42.701531] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:21.110 [2024-07-15 13:09:42.701539] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:21.110 [2024-07-15 13:09:42.701545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.110 [2024-07-15 13:09:42.701549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.110 [2024-07-15 13:09:42.701552] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c15ec0) 00:24:21.110 [2024-07-15 13:09:42.701559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.110 [2024-07-15 13:09:42.701569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c98e40, cid 0, qid 0 00:24:21.110 [2024-07-15 13:09:42.701759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.110 [2024-07-15 13:09:42.701765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.110 [2024-07-15 13:09:42.701768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.110 [2024-07-15 13:09:42.701775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c98e40) on 
tqpair=0x1c15ec0 00:24:21.110 [2024-07-15 13:09:42.701780] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:21.110 [2024-07-15 13:09:42.701788] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:21.110 [2024-07-15 13:09:42.701794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.110 [2024-07-15 13:09:42.701798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.110 [2024-07-15 13:09:42.701801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c15ec0) 00:24:21.110 [2024-07-15 13:09:42.701808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.110 [2024-07-15 13:09:42.701818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c98e40, cid 0, qid 0 00:24:21.110 [2024-07-15 13:09:42.702034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.110 [2024-07-15 13:09:42.702040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.110 [2024-07-15 13:09:42.702044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.110 [2024-07-15 13:09:42.702048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c98e40) on tqpair=0x1c15ec0 00:24:21.111 [2024-07-15 13:09:42.702052] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:21.111 [2024-07-15 13:09:42.702061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.702065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.702069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c15ec0) 00:24:21.111 [2024-07-15 13:09:42.702075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.111 [2024-07-15 13:09:42.702085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c98e40, cid 0, qid 0 00:24:21.111 [2024-07-15 13:09:42.702304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.111 [2024-07-15 13:09:42.702311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.111 [2024-07-15 13:09:42.702314] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.702318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c98e40) on tqpair=0x1c15ec0 00:24:21.111 [2024-07-15 13:09:42.702322] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:21.111 [2024-07-15 13:09:42.702327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:21.111 [2024-07-15 13:09:42.702334] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:21.111 [2024-07-15 13:09:42.702439] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:21.111 [2024-07-15 13:09:42.702443] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:21.111 [2024-07-15 13:09:42.702451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.702455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.702458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c15ec0) 00:24:21.111 [2024-07-15 13:09:42.702465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.111 [2024-07-15 13:09:42.702475] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c98e40, cid 0, qid 0 00:24:21.111 [2024-07-15 13:09:42.702641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.111 [2024-07-15 13:09:42.702650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.111 [2024-07-15 13:09:42.702653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.702657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c98e40) on tqpair=0x1c15ec0 00:24:21.111 [2024-07-15 13:09:42.702661] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:21.111 [2024-07-15 13:09:42.702670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.702674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.702678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c15ec0) 00:24:21.111 [2024-07-15 13:09:42.702684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.111 [2024-07-15 13:09:42.702694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c98e40, cid 0, qid 0 00:24:21.111 [2024-07-15 13:09:42.702902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.111 [2024-07-15 13:09:42.702908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.111 [2024-07-15 13:09:42.702912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.702916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c98e40) on tqpair=0x1c15ec0 00:24:21.111 [2024-07-15 13:09:42.702920] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:21.111 [2024-07-15 13:09:42.702924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:21.111 [2024-07-15 13:09:42.702932] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:21.111 [2024-07-15 13:09:42.702944] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:21.111 [2024-07-15 13:09:42.702952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.702955] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c15ec0) 00:24:21.111 [2024-07-15 13:09:42.702962] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.111 [2024-07-15 13:09:42.702972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c98e40, cid 0, qid 0 00:24:21.111 [2024-07-15 13:09:42.703197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.111 [2024-07-15 13:09:42.703204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.111 [2024-07-15 13:09:42.703207] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703211] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c15ec0): datao=0, datal=4096, cccid=0 00:24:21.111 [2024-07-15 13:09:42.703216] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c98e40) on tqpair(0x1c15ec0): expected_datao=0, payload_size=4096 00:24:21.111 [2024-07-15 13:09:42.703220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703297] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703301] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.111 [2024-07-15 13:09:42.703492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.111 [2024-07-15 13:09:42.703495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703499] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c98e40) on tqpair=0x1c15ec0 00:24:21.111 [2024-07-15 13:09:42.703506] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:21.111 [2024-07-15 13:09:42.703515] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:21.111 [2024-07-15 13:09:42.703519] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:21.111 [2024-07-15 13:09:42.703523] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:21.111 [2024-07-15 13:09:42.703527] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:21.111 [2024-07-15 13:09:42.703532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:21.111 [2024-07-15 13:09:42.703540] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:21.111 [2024-07-15 13:09:42.703547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703554] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c15ec0) 00:24:21.111 [2024-07-15 13:09:42.703561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:21.111 [2024-07-15 13:09:42.703572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c98e40, cid 0, qid 0 00:24:21.111 [2024-07-15 13:09:42.703756] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.111 [2024-07-15 13:09:42.703763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.111 [2024-07-15 13:09:42.703767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c98e40) on tqpair=0x1c15ec0 00:24:21.111 [2024-07-15 13:09:42.703777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703785] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c15ec0) 00:24:21.111 [2024-07-15 13:09:42.703791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.111 [2024-07-15 13:09:42.703797] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703800] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703804] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c15ec0) 00:24:21.111 [2024-07-15 13:09:42.703810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.111 [2024-07-15 13:09:42.703815] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c15ec0) 00:24:21.111 [2024-07-15 13:09:42.703828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.111 [2024-07-15 13:09:42.703834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c15ec0) 00:24:21.111 [2024-07-15 13:09:42.703847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.111 [2024-07-15 13:09:42.703851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:21.111 [2024-07-15 13:09:42.703861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:21.111 [2024-07-15 13:09:42.703869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.703873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c15ec0) 00:24:21.111 [2024-07-15 13:09:42.703879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.111 [2024-07-15 13:09:42.703891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c98e40, cid 0, qid 0 00:24:21.111 [2024-07-15 13:09:42.703896] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1c98fc0, cid 1, qid 0 00:24:21.111 [2024-07-15 13:09:42.703900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c99140, cid 2, qid 0 00:24:21.111 [2024-07-15 13:09:42.703905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c992c0, cid 3, qid 0 00:24:21.111 [2024-07-15 13:09:42.703910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c99440, cid 4, qid 0 00:24:21.111 [2024-07-15 13:09:42.707239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.111 [2024-07-15 13:09:42.707248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.111 [2024-07-15 13:09:42.707252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.111 [2024-07-15 13:09:42.707256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c99440) on tqpair=0x1c15ec0 00:24:21.111 [2024-07-15 13:09:42.707260] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:21.111 [2024-07-15 13:09:42.707265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:21.111 [2024-07-15 13:09:42.707274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:21.111 [2024-07-15 13:09:42.707280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:21.111 [2024-07-15 13:09:42.707286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.707290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.707294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c15ec0) 00:24:21.112 [2024-07-15 13:09:42.707301] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:21.112 [2024-07-15 13:09:42.707312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c99440, cid 4, qid 0 00:24:21.112 [2024-07-15 13:09:42.707512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.112 [2024-07-15 13:09:42.707519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.112 [2024-07-15 13:09:42.707522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.707526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c99440) on tqpair=0x1c15ec0 00:24:21.112 [2024-07-15 13:09:42.707589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:21.112 [2024-07-15 13:09:42.707599] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:21.112 [2024-07-15 13:09:42.707607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.707610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c15ec0) 00:24:21.112 [2024-07-15 13:09:42.707617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:21.112 [2024-07-15 13:09:42.707627] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c99440, cid 4, qid 0 00:24:21.112 [2024-07-15 13:09:42.707819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.112 [2024-07-15 13:09:42.707827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.112 [2024-07-15 13:09:42.707831] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.707835] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c15ec0): datao=0, datal=4096, cccid=4 00:24:21.112 [2024-07-15 13:09:42.707839] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c99440) on tqpair(0x1c15ec0): expected_datao=0, payload_size=4096 00:24:21.112 [2024-07-15 13:09:42.707843] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.707897] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.707901] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.708085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.112 [2024-07-15 13:09:42.708091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.112 [2024-07-15 13:09:42.708094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.708098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c99440) on tqpair=0x1c15ec0 00:24:21.112 [2024-07-15 13:09:42.708107] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:21.112 [2024-07-15 13:09:42.708116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:21.112 [2024-07-15 13:09:42.708125] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:21.112 [2024-07-15 13:09:42.708132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.708136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c15ec0) 00:24:21.112 [2024-07-15 13:09:42.708142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.112 [2024-07-15 13:09:42.708153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c99440, cid 4, qid 0 00:24:21.112 [2024-07-15 13:09:42.708374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.112 [2024-07-15 13:09:42.708381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.112 [2024-07-15 13:09:42.708384] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.708388] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c15ec0): datao=0, datal=4096, cccid=4 00:24:21.112 [2024-07-15 13:09:42.708392] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c99440) on tqpair(0x1c15ec0): expected_datao=0, payload_size=4096 00:24:21.112 [2024-07-15 13:09:42.708396] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.708455] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.708459] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.708633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.112 [2024-07-15 13:09:42.708640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.112 [2024-07-15 13:09:42.708643] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.708647] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c99440) on tqpair=0x1c15ec0 00:24:21.112 [2024-07-15 13:09:42.708659] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:21.112 [2024-07-15 13:09:42.708668] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:21.112 [2024-07-15 13:09:42.708675] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.708679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c15ec0) 00:24:21.112 [2024-07-15 13:09:42.708685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.112 [2024-07-15 13:09:42.708700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c99440, cid 4, qid 0 00:24:21.112 [2024-07-15 13:09:42.708916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.112 [2024-07-15 13:09:42.708922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.112 [2024-07-15 13:09:42.708926] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.708929] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c15ec0): datao=0, datal=4096, cccid=4 00:24:21.112 [2024-07-15 13:09:42.708934] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c99440) on tqpair(0x1c15ec0): expected_datao=0, payload_size=4096 00:24:21.112 [2024-07-15 13:09:42.708938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.708982] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.708986] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.709160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.112 [2024-07-15 13:09:42.709166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.112 [2024-07-15 13:09:42.709170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.709173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c99440) on tqpair=0x1c15ec0 00:24:21.112 [2024-07-15 13:09:42.709181] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:21.112 [2024-07-15 13:09:42.709189] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:21.112 [2024-07-15 13:09:42.709197] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:21.112 [2024-07-15 13:09:42.709203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:21.112 [2024-07-15 13:09:42.709208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:21.112 [2024-07-15 13:09:42.709214] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:21.112 [2024-07-15 13:09:42.709219] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:21.112 [2024-07-15 13:09:42.709223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:21.112 [2024-07-15 13:09:42.709228] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:21.112 [2024-07-15 13:09:42.709247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.709251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c15ec0) 00:24:21.112 [2024-07-15 13:09:42.709257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.112 [2024-07-15 13:09:42.709264] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.709268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.709271] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c15ec0) 00:24:21.112 [2024-07-15 13:09:42.709277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.112 [2024-07-15 13:09:42.709290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c99440, cid 4, qid 0 00:24:21.112 [2024-07-15 13:09:42.709296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c995c0, cid 5, qid 0 00:24:21.112 [2024-07-15 13:09:42.709482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.112 [2024-07-15 13:09:42.709489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.112 [2024-07-15 13:09:42.709492] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.709496] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c99440) on tqpair=0x1c15ec0 00:24:21.112 [2024-07-15 13:09:42.709503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.112 [2024-07-15 13:09:42.709509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.112 [2024-07-15 13:09:42.709512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.709516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c995c0) on tqpair=0x1c15ec0 00:24:21.112 [2024-07-15 13:09:42.709524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.709528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c15ec0) 00:24:21.112 [2024-07-15 13:09:42.709534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.112 
[2024-07-15 13:09:42.709544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c995c0, cid 5, qid 0 00:24:21.112 [2024-07-15 13:09:42.709753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.112 [2024-07-15 13:09:42.709759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.112 [2024-07-15 13:09:42.709762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.709766] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c995c0) on tqpair=0x1c15ec0 00:24:21.112 [2024-07-15 13:09:42.709775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.709778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c15ec0) 00:24:21.112 [2024-07-15 13:09:42.709785] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.112 [2024-07-15 13:09:42.709794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c995c0, cid 5, qid 0 00:24:21.112 [2024-07-15 13:09:42.709979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.112 [2024-07-15 13:09:42.709986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.112 [2024-07-15 13:09:42.709989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.709993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c995c0) on tqpair=0x1c15ec0 00:24:21.112 [2024-07-15 13:09:42.710002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.112 [2024-07-15 13:09:42.710006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c15ec0) 00:24:21.113 [2024-07-15 13:09:42.710012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.113 [2024-07-15 13:09:42.710022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c995c0, cid 5, qid 0 00:24:21.113 [2024-07-15 13:09:42.710204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.113 [2024-07-15 13:09:42.710210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.113 [2024-07-15 13:09:42.710213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c995c0) on tqpair=0x1c15ec0 00:24:21.113 [2024-07-15 13:09:42.710235] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c15ec0) 00:24:21.113 [2024-07-15 13:09:42.710246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.113 [2024-07-15 13:09:42.710254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710259] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c15ec0) 00:24:21.113 [2024-07-15 13:09:42.710265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:21.113 [2024-07-15 13:09:42.710273] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c15ec0) 00:24:21.113 [2024-07-15 13:09:42.710282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.113 [2024-07-15 13:09:42.710290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c15ec0) 00:24:21.113 [2024-07-15 13:09:42.710300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.113 [2024-07-15 13:09:42.710311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c995c0, cid 5, qid 0 00:24:21.113 [2024-07-15 13:09:42.710316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c99440, cid 4, qid 0 00:24:21.113 [2024-07-15 13:09:42.710321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c99740, cid 6, qid 0 00:24:21.113 [2024-07-15 13:09:42.710326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c998c0, cid 7, qid 0 00:24:21.113 [2024-07-15 13:09:42.710550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.113 [2024-07-15 13:09:42.710557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.113 [2024-07-15 13:09:42.710560] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710564] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c15ec0): datao=0, datal=8192, cccid=5 00:24:21.113 [2024-07-15 13:09:42.710568] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c995c0) on tqpair(0x1c15ec0): expected_datao=0, payload_size=8192 00:24:21.113 [2024-07-15 13:09:42.710572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710683] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710688] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.113 [2024-07-15 13:09:42.710699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.113 [2024-07-15 13:09:42.710702] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710706] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c15ec0): datao=0, datal=512, cccid=4 00:24:21.113 [2024-07-15 13:09:42.710710] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c99440) on tqpair(0x1c15ec0): expected_datao=0, payload_size=512 00:24:21.113 [2024-07-15 13:09:42.710714] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710721] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710724] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710730] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.113 [2024-07-15 13:09:42.710735] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.113 [2024-07-15 13:09:42.710739] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710742] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c15ec0): datao=0, datal=512, cccid=6 00:24:21.113 [2024-07-15 13:09:42.710746] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c99740) on tqpair(0x1c15ec0): expected_datao=0, payload_size=512 00:24:21.113 [2024-07-15 13:09:42.710751] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710759] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710762] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:21.113 [2024-07-15 13:09:42.710774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:21.113 [2024-07-15 13:09:42.710777] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710780] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c15ec0): datao=0, datal=4096, cccid=7 00:24:21.113 [2024-07-15 13:09:42.710785] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c998c0) on tqpair(0x1c15ec0): expected_datao=0, payload_size=4096 00:24:21.113 [2024-07-15 13:09:42.710789] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710814] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.710818] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.711020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.113 [2024-07-15 13:09:42.711026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.113 [2024-07-15 13:09:42.711029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.711033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c995c0) on tqpair=0x1c15ec0 00:24:21.113 [2024-07-15 13:09:42.711045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.113 [2024-07-15 13:09:42.711051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.113 [2024-07-15 13:09:42.711054] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.711058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c99440) on tqpair=0x1c15ec0 00:24:21.113 [2024-07-15 13:09:42.711068] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.113 [2024-07-15 13:09:42.711074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.113 [2024-07-15 13:09:42.711077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.711081] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c99740) on tqpair=0x1c15ec0 00:24:21.113 [2024-07-15 13:09:42.711088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.113 [2024-07-15 13:09:42.711094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.113 [2024-07-15 13:09:42.711097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.113 [2024-07-15 13:09:42.711101] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c998c0) on tqpair=0x1c15ec0 00:24:21.113 ===================================================== 00:24:21.113 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:21.113 ===================================================== 00:24:21.113 Controller Capabilities/Features 00:24:21.113 ================================ 00:24:21.113 Vendor ID: 8086 00:24:21.113 Subsystem Vendor ID: 8086 00:24:21.113 Serial Number: SPDK00000000000001 00:24:21.113 Model Number: SPDK bdev Controller 00:24:21.113 Firmware Version: 24.09 00:24:21.113 Recommended Arb Burst: 6 00:24:21.113 IEEE OUI Identifier: e4 d2 5c 00:24:21.113 Multi-path I/O 00:24:21.113 May have multiple subsystem ports: Yes 00:24:21.113 May have multiple controllers: Yes 00:24:21.113 Associated with SR-IOV VF: No 00:24:21.113 Max Data Transfer Size: 131072 00:24:21.113 Max Number of Namespaces: 32 00:24:21.113 Max Number of I/O Queues: 127 00:24:21.113 NVMe Specification Version (VS): 1.3 00:24:21.113 NVMe Specification Version (Identify): 1.3 00:24:21.113 Maximum Queue Entries: 128 00:24:21.113 Contiguous Queues Required: Yes 00:24:21.113 Arbitration Mechanisms Supported 00:24:21.113 Weighted Round Robin: Not Supported 00:24:21.113 Vendor Specific: Not Supported 00:24:21.113 Reset Timeout: 15000 ms 00:24:21.113 Doorbell Stride: 4 bytes 00:24:21.113 NVM Subsystem Reset: Not Supported 00:24:21.113 Command Sets Supported 00:24:21.113 NVM Command Set: Supported 00:24:21.113 Boot Partition: Not Supported 00:24:21.113 Memory Page Size Minimum: 4096 bytes 00:24:21.113 Memory Page Size Maximum: 4096 bytes 00:24:21.113 Persistent Memory Region: Not Supported 00:24:21.113 Optional Asynchronous Events Supported 00:24:21.113 Namespace Attribute Notices: Supported 00:24:21.113 Firmware Activation Notices: Not Supported 00:24:21.113 ANA Change Notices: Not Supported 00:24:21.113 PLE Aggregate Log Change Notices: Not Supported 00:24:21.113 LBA Status Info Alert Notices: Not Supported 00:24:21.113 EGE Aggregate Log Change Notices: Not Supported 00:24:21.113 Normal NVM Subsystem Shutdown event: Not Supported 00:24:21.113 Zone Descriptor Change Notices: Not Supported 00:24:21.113 Discovery Log Change Notices: Not Supported 00:24:21.113 Controller Attributes 00:24:21.113 128-bit Host Identifier: Supported 00:24:21.113 Non-Operational Permissive Mode: Not Supported 00:24:21.113 NVM Sets: Not Supported 00:24:21.113 Read Recovery Levels: Not Supported 00:24:21.113 Endurance Groups: Not Supported 00:24:21.113 Predictable Latency Mode: Not Supported 00:24:21.113 Traffic Based Keep ALive: Not Supported 00:24:21.113 Namespace Granularity: Not Supported 00:24:21.113 SQ Associations: Not Supported 00:24:21.113 UUID List: Not Supported 00:24:21.113 Multi-Domain Subsystem: Not Supported 00:24:21.113 Fixed Capacity Management: Not Supported 00:24:21.113 Variable Capacity Management: Not Supported 00:24:21.113 Delete Endurance Group: Not Supported 00:24:21.113 Delete NVM Set: Not Supported 00:24:21.113 Extended LBA Formats Supported: Not Supported 00:24:21.113 Flexible Data Placement Supported: Not Supported 00:24:21.113 00:24:21.113 Controller Memory Buffer Support 00:24:21.113 ================================ 00:24:21.113 Supported: No 00:24:21.113 00:24:21.113 Persistent Memory Region Support 00:24:21.113 ================================ 00:24:21.113 Supported: No 00:24:21.113 00:24:21.113 Admin Command Set Attributes 00:24:21.113 ============================ 00:24:21.113 Security 
Send/Receive: Not Supported 00:24:21.113 Format NVM: Not Supported 00:24:21.113 Firmware Activate/Download: Not Supported 00:24:21.113 Namespace Management: Not Supported 00:24:21.114 Device Self-Test: Not Supported 00:24:21.114 Directives: Not Supported 00:24:21.114 NVMe-MI: Not Supported 00:24:21.114 Virtualization Management: Not Supported 00:24:21.114 Doorbell Buffer Config: Not Supported 00:24:21.114 Get LBA Status Capability: Not Supported 00:24:21.114 Command & Feature Lockdown Capability: Not Supported 00:24:21.114 Abort Command Limit: 4 00:24:21.114 Async Event Request Limit: 4 00:24:21.114 Number of Firmware Slots: N/A 00:24:21.114 Firmware Slot 1 Read-Only: N/A 00:24:21.114 Firmware Activation Without Reset: N/A 00:24:21.114 Multiple Update Detection Support: N/A 00:24:21.114 Firmware Update Granularity: No Information Provided 00:24:21.114 Per-Namespace SMART Log: No 00:24:21.114 Asymmetric Namespace Access Log Page: Not Supported 00:24:21.114 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:21.114 Command Effects Log Page: Supported 00:24:21.114 Get Log Page Extended Data: Supported 00:24:21.114 Telemetry Log Pages: Not Supported 00:24:21.114 Persistent Event Log Pages: Not Supported 00:24:21.114 Supported Log Pages Log Page: May Support 00:24:21.114 Commands Supported & Effects Log Page: Not Supported 00:24:21.114 Feature Identifiers & Effects Log Page:May Support 00:24:21.114 NVMe-MI Commands & Effects Log Page: May Support 00:24:21.114 Data Area 4 for Telemetry Log: Not Supported 00:24:21.114 Error Log Page Entries Supported: 128 00:24:21.114 Keep Alive: Supported 00:24:21.114 Keep Alive Granularity: 10000 ms 00:24:21.114 00:24:21.114 NVM Command Set Attributes 00:24:21.114 ========================== 00:24:21.114 Submission Queue Entry Size 00:24:21.114 Max: 64 00:24:21.114 Min: 64 00:24:21.114 Completion Queue Entry Size 00:24:21.114 Max: 16 00:24:21.114 Min: 16 00:24:21.114 Number of Namespaces: 32 00:24:21.114 Compare Command: Supported 00:24:21.114 Write Uncorrectable Command: Not Supported 00:24:21.114 Dataset Management Command: Supported 00:24:21.114 Write Zeroes Command: Supported 00:24:21.114 Set Features Save Field: Not Supported 00:24:21.114 Reservations: Supported 00:24:21.114 Timestamp: Not Supported 00:24:21.114 Copy: Supported 00:24:21.114 Volatile Write Cache: Present 00:24:21.114 Atomic Write Unit (Normal): 1 00:24:21.114 Atomic Write Unit (PFail): 1 00:24:21.114 Atomic Compare & Write Unit: 1 00:24:21.114 Fused Compare & Write: Supported 00:24:21.114 Scatter-Gather List 00:24:21.114 SGL Command Set: Supported 00:24:21.114 SGL Keyed: Supported 00:24:21.114 SGL Bit Bucket Descriptor: Not Supported 00:24:21.114 SGL Metadata Pointer: Not Supported 00:24:21.114 Oversized SGL: Not Supported 00:24:21.114 SGL Metadata Address: Not Supported 00:24:21.114 SGL Offset: Supported 00:24:21.114 Transport SGL Data Block: Not Supported 00:24:21.114 Replay Protected Memory Block: Not Supported 00:24:21.114 00:24:21.114 Firmware Slot Information 00:24:21.114 ========================= 00:24:21.114 Active slot: 1 00:24:21.114 Slot 1 Firmware Revision: 24.09 00:24:21.114 00:24:21.114 00:24:21.114 Commands Supported and Effects 00:24:21.114 ============================== 00:24:21.114 Admin Commands 00:24:21.114 -------------- 00:24:21.114 Get Log Page (02h): Supported 00:24:21.114 Identify (06h): Supported 00:24:21.114 Abort (08h): Supported 00:24:21.114 Set Features (09h): Supported 00:24:21.114 Get Features (0Ah): Supported 00:24:21.114 Asynchronous Event Request (0Ch): 
Supported 00:24:21.114 Keep Alive (18h): Supported 00:24:21.114 I/O Commands 00:24:21.114 ------------ 00:24:21.114 Flush (00h): Supported LBA-Change 00:24:21.114 Write (01h): Supported LBA-Change 00:24:21.114 Read (02h): Supported 00:24:21.114 Compare (05h): Supported 00:24:21.114 Write Zeroes (08h): Supported LBA-Change 00:24:21.114 Dataset Management (09h): Supported LBA-Change 00:24:21.114 Copy (19h): Supported LBA-Change 00:24:21.114 00:24:21.114 Error Log 00:24:21.114 ========= 00:24:21.114 00:24:21.114 Arbitration 00:24:21.114 =========== 00:24:21.114 Arbitration Burst: 1 00:24:21.114 00:24:21.114 Power Management 00:24:21.114 ================ 00:24:21.114 Number of Power States: 1 00:24:21.114 Current Power State: Power State #0 00:24:21.114 Power State #0: 00:24:21.114 Max Power: 0.00 W 00:24:21.114 Non-Operational State: Operational 00:24:21.114 Entry Latency: Not Reported 00:24:21.114 Exit Latency: Not Reported 00:24:21.114 Relative Read Throughput: 0 00:24:21.114 Relative Read Latency: 0 00:24:21.114 Relative Write Throughput: 0 00:24:21.114 Relative Write Latency: 0 00:24:21.114 Idle Power: Not Reported 00:24:21.114 Active Power: Not Reported 00:24:21.114 Non-Operational Permissive Mode: Not Supported 00:24:21.114 00:24:21.114 Health Information 00:24:21.114 ================== 00:24:21.114 Critical Warnings: 00:24:21.114 Available Spare Space: OK 00:24:21.114 Temperature: OK 00:24:21.114 Device Reliability: OK 00:24:21.114 Read Only: No 00:24:21.114 Volatile Memory Backup: OK 00:24:21.114 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:21.114 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:21.114 Available Spare: 0% 00:24:21.114 Available Spare Threshold: 0% 00:24:21.114 Life Percentage Used:[2024-07-15 13:09:42.711198] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.114 [2024-07-15 13:09:42.711203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c15ec0) 00:24:21.114 [2024-07-15 13:09:42.711210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.114 [2024-07-15 13:09:42.711221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c998c0, cid 7, qid 0 00:24:21.114 [2024-07-15 13:09:42.711425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.114 [2024-07-15 13:09:42.711432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.114 [2024-07-15 13:09:42.711436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.114 [2024-07-15 13:09:42.711440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c998c0) on tqpair=0x1c15ec0 00:24:21.114 [2024-07-15 13:09:42.711470] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:21.114 [2024-07-15 13:09:42.711480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c98e40) on tqpair=0x1c15ec0 00:24:21.114 [2024-07-15 13:09:42.711486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.114 [2024-07-15 13:09:42.711491] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c98fc0) on tqpair=0x1c15ec0 00:24:21.114 [2024-07-15 13:09:42.711497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.114 [2024-07-15 
13:09:42.711502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c99140) on tqpair=0x1c15ec0 00:24:21.114 [2024-07-15 13:09:42.711507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.114 [2024-07-15 13:09:42.711512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c992c0) on tqpair=0x1c15ec0 00:24:21.114 [2024-07-15 13:09:42.711516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.115 [2024-07-15 13:09:42.711524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.115 [2024-07-15 13:09:42.711528] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.115 [2024-07-15 13:09:42.711531] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c15ec0) 00:24:21.115 [2024-07-15 13:09:42.711538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.115 [2024-07-15 13:09:42.711550] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c992c0, cid 3, qid 0 00:24:21.115 [2024-07-15 13:09:42.711748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.115 [2024-07-15 13:09:42.711754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.115 [2024-07-15 13:09:42.711758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.115 [2024-07-15 13:09:42.711761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c992c0) on tqpair=0x1c15ec0 00:24:21.115 [2024-07-15 13:09:42.711768] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.115 [2024-07-15 13:09:42.711772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.115 [2024-07-15 13:09:42.711775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c15ec0) 00:24:21.115 [2024-07-15 13:09:42.711782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.115 [2024-07-15 13:09:42.711794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c992c0, cid 3, qid 0 00:24:21.115 [2024-07-15 13:09:42.712014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.115 [2024-07-15 13:09:42.712021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.115 [2024-07-15 13:09:42.712025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.115 [2024-07-15 13:09:42.712028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c992c0) on tqpair=0x1c15ec0 00:24:21.115 [2024-07-15 13:09:42.712033] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:21.115 [2024-07-15 13:09:42.712038] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:21.115 [2024-07-15 13:09:42.712047] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.115 [2024-07-15 13:09:42.712051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.115 [2024-07-15 13:09:42.712054] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c15ec0) 00:24:21.115 [2024-07-15 13:09:42.712061] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.115 [2024-07-15 13:09:42.712070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c992c0, cid 3, qid 0 00:24:21.115 [2024-07-15 13:09:42.716240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.115 [2024-07-15 13:09:42.716249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.115 [2024-07-15 13:09:42.716252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.115 [2024-07-15 13:09:42.716256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c992c0) on tqpair=0x1c15ec0 00:24:21.115 [2024-07-15 13:09:42.716266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:21.115 [2024-07-15 13:09:42.716272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:21.115 [2024-07-15 13:09:42.716276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c15ec0) 00:24:21.115 [2024-07-15 13:09:42.716282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.115 [2024-07-15 13:09:42.716294] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c992c0, cid 3, qid 0 00:24:21.115 [2024-07-15 13:09:42.716468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:21.115 [2024-07-15 13:09:42.716475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:21.115 [2024-07-15 13:09:42.716478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:21.115 [2024-07-15 13:09:42.716482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c992c0) on tqpair=0x1c15ec0 00:24:21.115 [2024-07-15 13:09:42.716489] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:24:21.115 0% 00:24:21.115 Data Units Read: 0 00:24:21.115 Data Units Written: 0 00:24:21.115 Host Read Commands: 0 00:24:21.115 Host Write Commands: 0 00:24:21.115 Controller Busy Time: 0 minutes 00:24:21.115 Power Cycles: 0 00:24:21.115 Power On Hours: 0 hours 00:24:21.115 Unsafe Shutdowns: 0 00:24:21.115 Unrecoverable Media Errors: 0 00:24:21.115 Lifetime Error Log Entries: 0 00:24:21.115 Warning Temperature Time: 0 minutes 00:24:21.115 Critical Temperature Time: 0 minutes 00:24:21.115 00:24:21.115 Number of Queues 00:24:21.115 ================ 00:24:21.115 Number of I/O Submission Queues: 127 00:24:21.115 Number of I/O Completion Queues: 127 00:24:21.115 00:24:21.115 Active Namespaces 00:24:21.115 ================= 00:24:21.115 Namespace ID:1 00:24:21.115 Error Recovery Timeout: Unlimited 00:24:21.115 Command Set Identifier: NVM (00h) 00:24:21.115 Deallocate: Supported 00:24:21.115 Deallocated/Unwritten Error: Not Supported 00:24:21.115 Deallocated Read Value: Unknown 00:24:21.115 Deallocate in Write Zeroes: Not Supported 00:24:21.115 Deallocated Guard Field: 0xFFFF 00:24:21.115 Flush: Supported 00:24:21.115 Reservation: Supported 00:24:21.115 Namespace Sharing Capabilities: Multiple Controllers 00:24:21.115 Size (in LBAs): 131072 (0GiB) 00:24:21.115 Capacity (in LBAs): 131072 (0GiB) 00:24:21.115 Utilization (in LBAs): 131072 (0GiB) 00:24:21.115 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:21.115 EUI64: ABCDEF0123456789 00:24:21.115 UUID: fb6b8501-84bd-41ae-b794-4a0a66501cb8 00:24:21.115 Thin Provisioning: Not Supported 
00:24:21.115 Per-NS Atomic Units: Yes 00:24:21.115 Atomic Boundary Size (Normal): 0 00:24:21.115 Atomic Boundary Size (PFail): 0 00:24:21.115 Atomic Boundary Offset: 0 00:24:21.115 Maximum Single Source Range Length: 65535 00:24:21.115 Maximum Copy Length: 65535 00:24:21.115 Maximum Source Range Count: 1 00:24:21.115 NGUID/EUI64 Never Reused: No 00:24:21.115 Namespace Write Protected: No 00:24:21.115 Number of LBA Formats: 1 00:24:21.115 Current LBA Format: LBA Format #00 00:24:21.115 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:21.115 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:21.115 rmmod nvme_tcp 00:24:21.115 rmmod nvme_fabrics 00:24:21.115 rmmod nvme_keyring 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 788322 ']' 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 788322 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 788322 ']' 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 788322 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 788322 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 788322' 00:24:21.115 killing process with pid 788322 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 788322 00:24:21.115 13:09:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 788322 00:24:21.376 13:09:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:21.376 13:09:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:21.376 
13:09:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:21.376 13:09:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:21.376 13:09:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:21.376 13:09:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.376 13:09:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.376 13:09:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.299 13:09:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:23.300 00:24:23.300 real 0m11.351s 00:24:23.300 user 0m5.952s 00:24:23.300 sys 0m6.251s 00:24:23.300 13:09:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:23.300 13:09:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.300 ************************************ 00:24:23.300 END TEST nvmf_identify 00:24:23.300 ************************************ 00:24:23.561 13:09:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:23.561 13:09:45 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:23.561 13:09:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:23.561 13:09:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:23.561 13:09:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:23.561 ************************************ 00:24:23.561 START TEST nvmf_perf 00:24:23.561 ************************************ 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:23.561 * Looking for test storage... 
00:24:23.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.561 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.562 13:09:45 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:23.562 13:09:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:31.710 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:31.710 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:31.710 Found net devices under 0000:31:00.0: cvl_0_0 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:31.710 Found net devices under 0000:31:00.1: cvl_0_1 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.710 13:09:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.710 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.710 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.710 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:31.710 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.710 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.710 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.710 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:31.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:24:31.710 00:24:31.710 --- 10.0.0.2 ping statistics --- 00:24:31.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.710 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:24:31.710 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:24:31.710 00:24:31.710 --- 10.0.0.1 ping statistics --- 00:24:31.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.710 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=793187 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 793187 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 793187 ']' 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.711 13:09:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:31.711 [2024-07-15 13:09:53.347253] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:24:31.711 [2024-07-15 13:09:53.347324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.711 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.711 [2024-07-15 13:09:53.428024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.711 [2024-07-15 13:09:53.503126] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.711 [2024-07-15 13:09:53.503166] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
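For readability, the nvmf_tcp_init bring-up traced above reduces to the sequence below. Every command is taken verbatim from this run; the cvl_0_0/cvl_0_1 interface names and the cvl_0_0_ns_spdk namespace are specific to this rig, so read it as a sketch of the pattern rather than a copy-paste recipe.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # target side runs in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator/host address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # host -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host sanity check
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target app started inside the namespace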
00:24:31.711 [2024-07-15 13:09:53.503173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.711 [2024-07-15 13:09:53.503180] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.711 [2024-07-15 13:09:53.503185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.711 [2024-07-15 13:09:53.503298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.711 [2024-07-15 13:09:53.503311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.711 [2024-07-15 13:09:53.503355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.711 [2024-07-15 13:09:53.503355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.652 13:09:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:32.653 13:09:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:32.653 13:09:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:32.653 13:09:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:32.653 13:09:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:32.653 13:09:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.653 13:09:54 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:32.653 13:09:54 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:32.913 13:09:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:32.913 13:09:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:33.174 13:09:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:33.174 13:09:54 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:33.174 13:09:54 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:33.174 13:09:54 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:33.175 13:09:54 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:33.175 13:09:54 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:33.175 13:09:54 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:33.436 [2024-07-15 13:09:55.137633] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.436 13:09:55 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:33.696 13:09:55 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:33.696 13:09:55 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:33.696 13:09:55 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:33.696 13:09:55 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:33.956 13:09:55 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.217 [2024-07-15 13:09:55.816106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.217 13:09:55 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:34.217 13:09:56 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:34.217 13:09:56 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:34.217 13:09:56 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:34.217 13:09:56 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:35.601 Initializing NVMe Controllers 00:24:35.601 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:35.601 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:35.601 Initialization complete. Launching workers. 00:24:35.601 ======================================================== 00:24:35.601 Latency(us) 00:24:35.601 Device Information : IOPS MiB/s Average min max 00:24:35.601 PCIE (0000:65:00.0) NSID 1 from core 0: 79787.17 311.67 400.52 13.36 5231.94 00:24:35.601 ======================================================== 00:24:35.601 Total : 79787.17 311.67 400.52 13.36 5231.94 00:24:35.601 00:24:35.601 13:09:57 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:35.601 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.984 Initializing NVMe Controllers 00:24:36.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:36.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:36.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:36.984 Initialization complete. Launching workers. 
00:24:36.984 ======================================================== 00:24:36.984 Latency(us) 00:24:36.984 Device Information : IOPS MiB/s Average min max 00:24:36.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 0.31 12943.91 145.38 45792.12 00:24:36.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15251.49 7961.76 49881.39 00:24:36.984 ======================================================== 00:24:36.984 Total : 145.00 0.57 13994.26 145.38 49881.39 00:24:36.984 00:24:36.984 13:09:58 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:36.984 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.443 Initializing NVMe Controllers 00:24:38.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:38.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:38.444 Initialization complete. Launching workers. 00:24:38.444 ======================================================== 00:24:38.444 Latency(us) 00:24:38.444 Device Information : IOPS MiB/s Average min max 00:24:38.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10379.70 40.55 3086.14 379.46 10310.21 00:24:38.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3816.43 14.91 8492.58 5182.48 55426.80 00:24:38.444 ======================================================== 00:24:38.444 Total : 14196.13 55.45 4539.58 379.46 55426.80 00:24:38.444 00:24:38.444 13:10:00 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:38.444 13:10:00 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:38.444 13:10:00 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:38.444 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.988 Initializing NVMe Controllers 00:24:40.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:40.988 Controller IO queue size 128, less than required. 00:24:40.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:40.988 Controller IO queue size 128, less than required. 00:24:40.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:40.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:40.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:40.988 Initialization complete. Launching workers. 
00:24:40.988 ======================================================== 00:24:40.988 Latency(us) 00:24:40.988 Device Information : IOPS MiB/s Average min max 00:24:40.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1233.76 308.44 106454.60 58907.10 140722.82 00:24:40.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 602.88 150.72 220866.19 71672.02 368720.24 00:24:40.988 ======================================================== 00:24:40.988 Total : 1836.65 459.16 144010.50 58907.10 368720.24 00:24:40.988 00:24:40.988 13:10:02 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:40.988 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.988 No valid NVMe controllers or AIO or URING devices found 00:24:40.988 Initializing NVMe Controllers 00:24:40.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:40.988 Controller IO queue size 128, less than required. 00:24:40.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:40.988 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:40.988 Controller IO queue size 128, less than required. 00:24:40.989 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:40.989 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:40.989 WARNING: Some requested NVMe devices were skipped 00:24:41.249 13:10:02 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:41.249 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.788 Initializing NVMe Controllers 00:24:43.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:43.788 Controller IO queue size 128, less than required. 00:24:43.788 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:43.788 Controller IO queue size 128, less than required. 00:24:43.788 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:43.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:43.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:43.788 Initialization complete. Launching workers. 
00:24:43.788 00:24:43.788 ==================== 00:24:43.788 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:43.788 TCP transport: 00:24:43.788 polls: 30131 00:24:43.788 idle_polls: 13345 00:24:43.788 sock_completions: 16786 00:24:43.788 nvme_completions: 5087 00:24:43.788 submitted_requests: 7560 00:24:43.788 queued_requests: 1 00:24:43.788 00:24:43.788 ==================== 00:24:43.788 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:43.788 TCP transport: 00:24:43.788 polls: 29507 00:24:43.788 idle_polls: 13126 00:24:43.788 sock_completions: 16381 00:24:43.788 nvme_completions: 5335 00:24:43.788 submitted_requests: 8068 00:24:43.788 queued_requests: 1 00:24:43.788 ======================================================== 00:24:43.788 Latency(us) 00:24:43.788 Device Information : IOPS MiB/s Average min max 00:24:43.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1271.19 317.80 103397.67 53176.94 167552.27 00:24:43.788 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1333.17 333.29 96570.97 51314.03 149270.06 00:24:43.788 ======================================================== 00:24:43.788 Total : 2604.36 651.09 99903.08 51314.03 167552.27 00:24:43.788 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:43.788 rmmod nvme_tcp 00:24:43.788 rmmod nvme_fabrics 00:24:43.788 rmmod nvme_keyring 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 793187 ']' 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 793187 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 793187 ']' 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 793187 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 793187 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 793187' 00:24:43.788 killing process with pid 793187 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 793187 00:24:43.788 13:10:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 793187 00:24:46.333 13:10:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:46.333 13:10:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:46.333 13:10:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:46.333 13:10:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:46.333 13:10:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:46.333 13:10:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.333 13:10:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.333 13:10:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.247 13:10:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:48.247 00:24:48.247 real 0m24.448s 00:24:48.247 user 0m57.877s 00:24:48.247 sys 0m8.376s 00:24:48.247 13:10:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:48.247 13:10:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:48.247 ************************************ 00:24:48.247 END TEST nvmf_perf 00:24:48.247 ************************************ 00:24:48.247 13:10:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:48.247 13:10:09 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:48.247 13:10:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:48.247 13:10:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:48.247 13:10:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:48.247 ************************************ 00:24:48.247 START TEST nvmf_fio_host 00:24:48.247 ************************************ 00:24:48.247 13:10:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:48.247 * Looking for test storage... 
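For context on the numbers above, the target that spdk_nvme_perf exercised was assembled by perf.sh with the RPC sequence below. The commands are lifted from the trace, with the long workspace prefix shortened to rpc.py, so treat this as a recap of this run rather than a standalone recipe.

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # 64 MiB / 512 B malloc bdev
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1      # local NVMe at 0000:65:00.0, attached via gen_nvme.sh
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Apart from the initial local baseline against -r 'trtype:PCIe traddr:0000:65:00.0', the perf runs vary only the initiator-side parameters (queue depth 1/32/128, IO size 4096/262144/36964 bytes, -t 1/2/5, plus the -HI, -O, -c 0xf -P 4 and --transport-stat variants) against the same connection string, e.g.:

    spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'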
00:24:48.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.247 13:10:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.247 13:10:09 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.247 13:10:09 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.247 13:10:09 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.247 13:10:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.247 13:10:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.247 13:10:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.247 13:10:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:48.248 13:10:09 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:56.390 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:56.390 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:56.390 Found net devices under 0000:31:00.0: cvl_0_0 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:56.390 Found net devices under 0000:31:00.1: cvl_0_1 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
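The trace above shows nvmf/common.sh working out which NICs the run can use: it collects the Intel E810 PCI functions (device IDs 0x1592 and 0x159b), then resolves each function to its kernel interface through /sys/bus/pci/devices/<bdf>/net/, which is how cvl_0_0 and cvl_0_1 are found. Below is a minimal standalone sketch of that discovery step, assuming a plain sysfs walk over the two device IDs seen in the trace; it is illustrative only, not the helper's actual code.

#!/usr/bin/env bash
# Sketch: map supported E810 PCI functions to their net devices via sysfs,
# mirroring the gather_supported_nvmf_pci_devs / pci_net_devs logic traced above.
set -euo pipefail

net_devs=()
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 ]] || continue             # Intel functions only
    case $(cat "$pci/device") in 0x1592|0x159b) ;; *) continue ;; esac
    for netdir in "$pci"/net/*; do                               # driver-bound functions expose net/<ifname>
        [[ -e $netdir ]] || continue
        echo "Found net devices under ${pci##*/}: ${netdir##*/}"
        net_devs+=("${netdir##*/}")
    done
done
if (( ${#net_devs[@]} )); then
    echo "usable interfaces: ${net_devs[*]}"
else
    echo "no supported NICs found" >&2
fi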
00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:56.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:24:56.390 00:24:56.390 --- 10.0.0.2 ping statistics --- 00:24:56.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.390 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:24:56.390 00:24:56.390 --- 10.0.0.1 ping statistics --- 00:24:56.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.390 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:56.390 13:10:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.391 13:10:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=800552 00:24:56.391 13:10:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:56.391 13:10:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:56.391 13:10:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 800552 00:24:56.391 13:10:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 800552 ']' 00:24:56.391 13:10:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.391 13:10:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.391 13:10:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.391 13:10:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.391 13:10:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.391 [2024-07-15 13:10:17.985093] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:24:56.391 [2024-07-15 13:10:17.985157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.391 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.391 [2024-07-15 13:10:18.067751] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:56.391 [2024-07-15 13:10:18.143919] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:56.391 [2024-07-15 13:10:18.143960] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.391 [2024-07-15 13:10:18.143969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.391 [2024-07-15 13:10:18.143976] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.391 [2024-07-15 13:10:18.143981] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.391 [2024-07-15 13:10:18.144120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.391 [2024-07-15 13:10:18.144250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.391 [2024-07-15 13:10:18.144411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:56.391 [2024-07-15 13:10:18.144535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.963 13:10:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:56.963 13:10:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:56.963 13:10:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:57.225 [2024-07-15 13:10:18.915232] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.225 13:10:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:57.225 13:10:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:57.225 13:10:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.225 13:10:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:57.486 Malloc1 00:24:57.486 13:10:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:57.747 13:10:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:57.747 13:10:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.009 [2024-07-15 13:10:19.624736] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.009 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:58.295 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:58.295 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:58.295 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.295 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.295 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:58.295 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:58.295 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:58.295 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:58.295 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:58.295 13:10:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:58.558 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:58.558 fio-3.35 00:24:58.558 Starting 1 thread 00:24:58.558 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.102 00:25:01.102 test: (groupid=0, jobs=1): err= 0: pid=801285: Mon Jul 15 13:10:22 2024 00:25:01.102 read: IOPS=10.3k, BW=40.4MiB/s (42.4MB/s)(81.0MiB/2004msec) 00:25:01.103 slat (usec): min=2, max=279, avg= 2.20, stdev= 2.80 00:25:01.103 clat (usec): min=3568, max=9882, avg=6840.78, stdev=1006.90 00:25:01.103 lat (usec): min=3570, max=9889, avg=6842.98, stdev=1006.91 00:25:01.103 clat percentiles (usec): 00:25:01.103 | 1.00th=[ 4490], 5.00th=[ 4817], 10.00th=[ 5014], 20.00th=[ 6128], 00:25:01.103 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7308], 00:25:01.103 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 7832], 95.00th=[ 8029], 00:25:01.103 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 9110], 99.95th=[ 9503], 00:25:01.103 | 99.99th=[ 9634] 00:25:01.103 bw ( KiB/s): min=38360, 
max=48336, per=99.86%, avg=41334.00, stdev=4691.05, samples=4 00:25:01.103 iops : min= 9590, max=12084, avg=10333.50, stdev=1172.76, samples=4 00:25:01.103 write: IOPS=10.4k, BW=40.5MiB/s (42.4MB/s)(81.1MiB/2004msec); 0 zone resets 00:25:01.103 slat (usec): min=2, max=266, avg= 2.30, stdev= 2.05 00:25:01.103 clat (usec): min=2908, max=8093, avg=5489.26, stdev=804.95 00:25:01.103 lat (usec): min=2926, max=8303, avg=5491.56, stdev=804.99 00:25:01.103 clat percentiles (usec): 00:25:01.103 | 1.00th=[ 3621], 5.00th=[ 3884], 10.00th=[ 4047], 20.00th=[ 4883], 00:25:01.103 | 30.00th=[ 5407], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5866], 00:25:01.103 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6259], 95.00th=[ 6456], 00:25:01.103 | 99.00th=[ 6783], 99.50th=[ 6915], 99.90th=[ 7701], 99.95th=[ 7767], 00:25:01.103 | 99.99th=[ 8029] 00:25:01.103 bw ( KiB/s): min=38784, max=48976, per=99.92%, avg=41414.00, stdev=5042.01, samples=4 00:25:01.103 iops : min= 9696, max=12244, avg=10353.50, stdev=1260.50, samples=4 00:25:01.103 lat (msec) : 4=4.18%, 10=95.82% 00:25:01.103 cpu : usr=67.85%, sys=29.76%, ctx=46, majf=0, minf=7 00:25:01.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:01.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:01.103 issued rwts: total=20737,20764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.103 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:01.103 00:25:01.103 Run status group 0 (all jobs): 00:25:01.103 READ: bw=40.4MiB/s (42.4MB/s), 40.4MiB/s-40.4MiB/s (42.4MB/s-42.4MB/s), io=81.0MiB (84.9MB), run=2004-2004msec 00:25:01.103 WRITE: bw=40.5MiB/s (42.4MB/s), 40.5MiB/s-40.5MiB/s (42.4MB/s-42.4MB/s), io=81.1MiB (85.0MB), run=2004-2004msec 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk 
'{print $3}' 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:01.103 13:10:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:01.363 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:01.363 fio-3.35 00:25:01.363 Starting 1 thread 00:25:01.363 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.906 00:25:03.906 test: (groupid=0, jobs=1): err= 0: pid=801898: Mon Jul 15 13:10:25 2024 00:25:03.906 read: IOPS=9139, BW=143MiB/s (150MB/s)(287MiB/2009msec) 00:25:03.906 slat (usec): min=3, max=110, avg= 3.63, stdev= 1.63 00:25:03.906 clat (usec): min=2124, max=15494, avg=8711.62, stdev=2040.35 00:25:03.906 lat (usec): min=2128, max=15501, avg=8715.25, stdev=2040.47 00:25:03.906 clat percentiles (usec): 00:25:03.906 | 1.00th=[ 4490], 5.00th=[ 5538], 10.00th=[ 6128], 20.00th=[ 6849], 00:25:03.906 | 30.00th=[ 7439], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9241], 00:25:03.906 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11338], 95.00th=[11863], 00:25:03.906 | 99.00th=[13566], 99.50th=[13960], 99.90th=[14746], 99.95th=[14877], 00:25:03.906 | 99.99th=[15008] 00:25:03.906 bw ( KiB/s): min=58624, max=80416, per=49.26%, avg=72032.00, stdev=9357.54, samples=4 00:25:03.906 iops : min= 3664, max= 5026, avg=4502.00, stdev=584.85, samples=4 00:25:03.906 write: IOPS=5446, BW=85.1MiB/s (89.2MB/s)(147MiB/1725msec); 0 zone resets 00:25:03.906 slat (usec): min=39, max=406, avg=41.22, stdev= 8.10 00:25:03.906 clat (usec): min=3940, max=16363, avg=9521.28, stdev=1569.37 00:25:03.906 lat (usec): min=3980, max=16403, avg=9562.49, stdev=1571.07 00:25:03.906 clat percentiles (usec): 00:25:03.906 | 1.00th=[ 6063], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8160], 00:25:03.906 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:25:03.906 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11469], 95.00th=[12256], 00:25:03.906 | 99.00th=[14091], 99.50th=[15008], 99.90th=[16057], 99.95th=[16188], 00:25:03.906 | 99.99th=[16319] 00:25:03.906 bw ( KiB/s): min=61728, max=83392, per=86.25%, avg=75168.00, stdev=9344.99, samples=4 00:25:03.906 iops : min= 3858, max= 5212, avg=4698.00, stdev=584.06, samples=4 00:25:03.906 lat (msec) : 4=0.33%, 10=68.92%, 20=30.75% 00:25:03.907 cpu : usr=82.57%, sys=14.54%, ctx=16, majf=0, minf=30 00:25:03.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:03.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:03.907 issued rwts: total=18361,9396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:03.907 00:25:03.907 Run status group 0 (all jobs): 00:25:03.907 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=287MiB (301MB), run=2009-2009msec 00:25:03.907 WRITE: bw=85.1MiB/s (89.2MB/s), 85.1MiB/s-85.1MiB/s (89.2MB/s-89.2MB/s), io=147MiB (154MB), run=1725-1725msec 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:03.907 rmmod nvme_tcp 00:25:03.907 rmmod nvme_fabrics 00:25:03.907 rmmod nvme_keyring 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 800552 ']' 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 800552 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 800552 ']' 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 800552 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 800552 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 800552' 00:25:03.907 killing process with pid 800552 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 800552 00:25:03.907 13:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 800552 00:25:04.167 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:04.167 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:04.167 13:10:25 nvmf_tcp.nvmf_fio_host 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:04.167 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:04.167 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:04.167 13:10:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.167 13:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:04.167 13:10:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.124 13:10:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:06.124 00:25:06.124 real 0m18.106s 00:25:06.124 user 1m6.901s 00:25:06.124 sys 0m7.983s 00:25:06.124 13:10:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:06.124 13:10:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.124 ************************************ 00:25:06.124 END TEST nvmf_fio_host 00:25:06.124 ************************************ 00:25:06.124 13:10:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:06.124 13:10:27 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:06.124 13:10:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:06.124 13:10:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:06.124 13:10:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:06.124 ************************************ 00:25:06.124 START TEST nvmf_failover 00:25:06.124 ************************************ 00:25:06.124 13:10:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:06.385 * Looking for test storage... 
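With the fio host test finished and the failover test starting, both runs drive the same target bring-up through rpc.py: create a TCP transport, a 64 MiB malloc bdev, and a subsystem with one namespace, then add listeners (a single 4420 listener for the fio test; 4420, 4421 and 4422 for failover so one can later be removed). The sketch below is a condensed, illustrative recap of those logged commands; the SPDK variable and the three-port loop are conveniences here, not part of the test scripts.

# Recap of the rpc.py sequence traced in this log (failover variant).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192        # -u 8192: IO unit size; -o: the extra TCP flag common.sh appends
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                      # failover listens on all three; removing 4420 later forces the path switch
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done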
00:25:06.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:06.385 13:10:27 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.385 13:10:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:06.385 13:10:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.524 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:14.525 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:14.525 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:14.525 Found net devices under 0000:31:00.0: cvl_0_0 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:14.525 Found net devices under 0000:31:00.1: cvl_0_1 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.525 13:10:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:14.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:14.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:25:14.525 00:25:14.525 --- 10.0.0.2 ping statistics --- 00:25:14.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.525 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:14.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:25:14.525 00:25:14.525 --- 10.0.0.1 ping statistics --- 00:25:14.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.525 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=807133 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 807133 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 807133 ']' 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:14.525 13:10:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.525 [2024-07-15 13:10:36.220390] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:25:14.525 [2024-07-15 13:10:36.220468] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.525 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.525 [2024-07-15 13:10:36.322489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:14.785 [2024-07-15 13:10:36.415898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.785 [2024-07-15 13:10:36.415959] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.785 [2024-07-15 13:10:36.415967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.785 [2024-07-15 13:10:36.415974] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.785 [2024-07-15 13:10:36.415980] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.785 [2024-07-15 13:10:36.416115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.785 [2024-07-15 13:10:36.416430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.785 [2024-07-15 13:10:36.416527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.356 13:10:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:15.356 13:10:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:15.356 13:10:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:15.356 13:10:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:15.356 13:10:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:15.356 13:10:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.356 13:10:37 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:15.356 [2024-07-15 13:10:37.174260] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.616 13:10:37 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:15.616 Malloc0 00:25:15.616 13:10:37 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.877 13:10:37 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:16.139 13:10:37 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.139 [2024-07-15 13:10:37.880781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.139 13:10:37 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:16.400 [2024-07-15 
13:10:38.049214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:16.400 13:10:38 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:16.400 [2024-07-15 13:10:38.213736] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:16.661 13:10:38 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=807501 00:25:16.661 13:10:38 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:16.661 13:10:38 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.661 13:10:38 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 807501 /var/tmp/bdevperf.sock 00:25:16.661 13:10:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 807501 ']' 00:25:16.661 13:10:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.661 13:10:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.661 13:10:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:16.661 13:10:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.661 13:10:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:17.603 13:10:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:17.603 13:10:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:17.603 13:10:39 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:17.603 NVMe0n1 00:25:17.603 13:10:39 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:17.863 00:25:17.863 13:10:39 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=807831 00:25:17.863 13:10:39 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:17.863 13:10:39 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:18.803 13:10:40 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.063 [2024-07-15 13:10:40.726430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.063 [2024-07-15 13:10:40.726477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1776770 is same with the state(5) to be set 00:25:19.063 (tcp.c:1607:nvmf_tcp_qpair_set_recv_state message repeated verbatim for tqpair=0x1776770 at successive timestamps through [2024-07-15 13:10:40.726960]) 00:25:19.065 [2024-07-15 13:10:40.726964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same
with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.726968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.726972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.726976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.726980] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.726984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.726989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.726993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.726997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.727002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.727006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.727010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.727014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.727019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.727023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.727027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 [2024-07-15 13:10:40.727032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1776770 is same with the state(5) to be set 00:25:19.065 13:10:40 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:22.363 13:10:43 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.363 00:25:22.363 13:10:44 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:22.725 [2024-07-15 13:10:44.305922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1777e70 is same with the state(5) to be set 00:25:22.725 [2024-07-15 13:10:44.305960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1777e70 is same with the state(5) 
to be set 00:25:22.725 (tcp.c:1607:nvmf_tcp_qpair_set_recv_state message repeated verbatim for tqpair=0x1777e70 at successive timestamps through [2024-07-15 13:10:44.306063]) 00:25:22.726 [2024-07-15 13:10:44.306067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x1777e70 is same with the state(5) to be set 00:25:22.726 [2024-07-15 13:10:44.306071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1777e70 is same with the state(5) to be set 00:25:22.726 [2024-07-15 13:10:44.306076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1777e70 is same with the state(5) to be set 00:25:22.726 [2024-07-15 13:10:44.306080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1777e70 is same with the state(5) to be set 00:25:22.726 [2024-07-15 13:10:44.306084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1777e70 is same with the state(5) to be set 00:25:22.726 [2024-07-15 13:10:44.306088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1777e70 is same with the state(5) to be set 00:25:22.726 [2024-07-15 13:10:44.306093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1777e70 is same with the state(5) to be set 00:25:22.726 [2024-07-15 13:10:44.306098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1777e70 is same with the state(5) to be set 00:25:22.726 13:10:44 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:26.026 13:10:47 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.026 [2024-07-15 13:10:47.481848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.026 13:10:47 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:26.968 13:10:48 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:26.968 [2024-07-15 13:10:48.661461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661500] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 [2024-07-15 13:10:48.661568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1778550 is same with the state(5) to be set 00:25:26.968 13:10:48 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 807831 00:25:33.577 0 00:25:33.577 13:10:54 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 807501 00:25:33.577 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 807501 ']' 00:25:33.577 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 807501 00:25:33.577 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:33.577 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:33.577 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 807501 00:25:33.577 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:33.577 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:33.577 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 807501' 00:25:33.577 killing process with pid 807501 00:25:33.577 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 807501 00:25:33.577 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 807501 00:25:33.577 13:10:54 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:33.577 [2024-07-15 13:10:38.282952] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:25:33.577 [2024-07-15 13:10:38.283009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid807501 ] 00:25:33.577 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.577 [2024-07-15 13:10:38.348645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.577 [2024-07-15 13:10:38.413377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.577 Running I/O for 15 seconds... 
00:25:33.577 [2024-07-15 13:10:40.727617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727818] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.577 [2024-07-15 13:10:40.727928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.577 [2024-07-15 13:10:40.727937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.727945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.727954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.727961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.727970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.727977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.727986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.727993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:121 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97912 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:33.578 [2024-07-15 13:10:40.728492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.578 [2024-07-15 13:10:40.728556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.578 [2024-07-15 13:10:40.728565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 
13:10:40.728654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.728991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.728998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.729007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.729014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.729023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.729030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.729039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.729046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.729056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.729063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.729072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.729081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.729090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.729097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.729106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.729113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.729122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.579 [2024-07-15 13:10:40.729129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.729139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.579 [2024-07-15 13:10:40.729146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.579 [2024-07-15 13:10:40.729155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.579 [2024-07-15 13:10:40.729162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 
[2024-07-15 13:10:40.729321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.580 [2024-07-15 13:10:40.729631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:42 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.580 [2024-07-15 13:10:40.729647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.580 [2024-07-15 13:10:40.729663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.580 [2024-07-15 13:10:40.729679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.580 [2024-07-15 13:10:40.729699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.580 [2024-07-15 13:10:40.729715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.580 [2024-07-15 13:10:40.729730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.580 [2024-07-15 13:10:40.729761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.580 [2024-07-15 13:10:40.729768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98360 len:8 PRP1 0x0 PRP2 0x0 00:25:33.580 [2024-07-15 13:10:40.729776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.580 [2024-07-15 13:10:40.729813] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2149df0 was disconnected and freed. reset controller. 
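(Editor's note: the block above is the SPDK NVMe driver draining its I/O submission queue after the TCP qpair to 10.0.0.2:4420 drops; every queued READ/WRITE is completed manually with the generic status ABORTED - SQ DELETION, printed as "(00/08)" for status code type 00h / status code 08h, before the bdev layer resets the controller. A quick way to summarize a dump like this offline is a one-line shell filter; "build.log" below is only a placeholder name for a saved copy of this console output, not a file produced by the test.)

# Hypothetical post-processing, not part of the test run: count aborted queued
# commands per opcode from a saved copy of this log (file name assumed).
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log | sort | uniq -c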
00:25:33.581 [2024-07-15 13:10:40.729822] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:33.581 [2024-07-15 13:10:40.729843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.581 [2024-07-15 13:10:40.729851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:40.729859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.581 [2024-07-15 13:10:40.729866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:40.729874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.581 [2024-07-15 13:10:40.729881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:40.729889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.581 [2024-07-15 13:10:40.729896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:40.729903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:33.581 [2024-07-15 13:10:40.729941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214dea0 (9): Bad file descriptor 00:25:33.581 [2024-07-15 13:10:40.733517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:33.581 [2024-07-15 13:10:40.772316] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
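(Editor's note: the lines above record the interesting part of this pass: bdev_nvme_failover_trid switches the path from 10.0.0.2:4420 to 10.0.0.2:4421, the old tqpair 0x214dea0 is torn down with "Bad file descriptor", and the controller reset completes successfully. The actual test script and its arguments are not visible in this log; the following is only a sketch of how such a failover is typically provoked with SPDK's rpc.py, with the bdev/malloc names and sizes chosen here as assumptions. The NQN and addresses are taken from the log itself.)

# Sketch only: approximate target/host setup for a TCP failover like the one logged above.
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# Host side: attach the primary path, register the alternate port as a failover trid
# by attaching it under the same controller name, then drop the active listener so
# bdev_nvme fails over to 4421 (matching the "Start failover" notice above).
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420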
00:25:33.581 [2024-07-15 13:10:44.307552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.581 [2024-07-15 13:10:44.307587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.581 [2024-07-15 13:10:44.307604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.581 [2024-07-15 13:10:44.307623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.581 [2024-07-15 13:10:44.307638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214dea0 is same with the state(5) to be set 00:25:33.581 [2024-07-15 13:10:44.307703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.307986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.307995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.308002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.308011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.308018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.308026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.308034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.308043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.308050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.308059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.308066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.308075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.308084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.308093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.308099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.308108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.308115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.308124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.308131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.308140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.581 [2024-07-15 13:10:44.308148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.581 [2024-07-15 13:10:44.308157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 
13:10:44.308324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.582 [2024-07-15 13:10:44.308460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.582 [2024-07-15 13:10:44.308476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.582 [2024-07-15 13:10:44.308493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.582 [2024-07-15 13:10:44.308509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.582 [2024-07-15 13:10:44.308527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.582 [2024-07-15 13:10:44.308543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.582 [2024-07-15 13:10:44.308559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.582 [2024-07-15 13:10:44.308639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.582 [2024-07-15 13:10:44.308649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:67 nsid:1 lba:37352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37432 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 13:10:44.308963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.583 [2024-07-15 13:10:44.308972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.583 [2024-07-15 
13:10:44.308978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: nvme_qpair.c repeats the same NOTICE pair for every queued command on qid:1 - WRITE lba 37520-37784 and READ lba 36832-36952 - each completed with ABORTED - SQ DELETION (00/08) while the controller is reset]
00:25:33.584 [2024-07-15 13:10:44.309811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:33.584 [2024-07-15 13:10:44.309817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:33.584 [2024-07-15 13:10:44.309824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37792 len:8 PRP1 0x0 PRP2 0x0
00:25:33.584 [2024-07-15 13:10:44.309832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.584 [2024-07-15 13:10:44.309866] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x217c960 was disconnected and freed. reset controller.
00:25:33.584 [2024-07-15 13:10:44.309874] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:33.584 [2024-07-15 13:10:44.309883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:33.584 [2024-07-15 13:10:44.313421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:33.585 [2024-07-15 13:10:44.313445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214dea0 (9): Bad file descriptor
00:25:33.585 [2024-07-15 13:10:44.353061] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[log condensed: the next reset at 13:10:48 aborts the remaining queued commands on qid:1 in the same way - READ lba 47088-48048 and WRITE lba 48056-48096, each with ABORTED - SQ DELETION (00/08)]
00:25:33.588 [2024-07-15 13:10:48.663869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:33.588 [2024-07-15 13:10:48.663875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:33.588 [2024-07-15 13:10:48.663882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48104 len:8 PRP1 0x0 PRP2 0x0
00:25:33.588 [2024-07-15 13:10:48.663889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.588 [2024-07-15 13:10:48.663927] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x218a5b0 was disconnected and freed. reset controller.
00:25:33.588 [2024-07-15 13:10:48.663937] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:33.588 [2024-07-15 13:10:48.663956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:33.588 [2024-07-15 13:10:48.663964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.588 [2024-07-15 13:10:48.663973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:33.588 [2024-07-15 13:10:48.663982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.588 [2024-07-15 13:10:48.663990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:33.588 [2024-07-15 13:10:48.663997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.588 [2024-07-15 13:10:48.664005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:33.588 [2024-07-15 13:10:48.664013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.588 [2024-07-15 13:10:48.664020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:33.588 [2024-07-15 13:10:48.667614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:33.588 [2024-07-15 13:10:48.667640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214dea0 (9): Bad file descriptor
00:25:33.589 [2024-07-15 13:10:48.706882] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
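The notices above record one complete bdev_nvme failover cycle per path: the TCP qpair is disconnected, every queued I/O is completed with ABORTED - SQ DELETION, and the controller is re-created on the next listed trid (4421, then 4422, then back to 4420). The test asserts on exactly these events at host/failover.sh@65, visible in the trace that follows; a hedged, hand-run equivalent of that check (sketch only, assuming the try.txt path that the script cats later in this trace) would be:

    # Sketch only - not part of the recorded run. try.txt is the captured bdevperf log.
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    # One successful reset is expected per path transition; the test requires exactly 3.
    (( count == 3 )) || echo "unexpected failover count: $count"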
00:25:33.589 00:25:33.589 Latency(us) 00:25:33.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.589 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:33.589 Verification LBA range: start 0x0 length 0x4000 00:25:33.589 NVMe0n1 : 15.01 11276.08 44.05 259.38 0.00 11068.56 764.59 14636.37 00:25:33.589 =================================================================================================================== 00:25:33.589 Total : 11276.08 44.05 259.38 0.00 11068.56 764.59 14636.37 00:25:33.589 Received shutdown signal, test time was about 15.000000 seconds 00:25:33.589 00:25:33.589 Latency(us) 00:25:33.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.589 =================================================================================================================== 00:25:33.589 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.589 13:10:54 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:33.589 13:10:54 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:33.589 13:10:54 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:33.589 13:10:54 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=810700 00:25:33.589 13:10:54 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 810700 /var/tmp/bdevperf.sock 00:25:33.589 13:10:54 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:33.589 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 810700 ']' 00:25:33.589 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:33.589 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:33.589 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:33.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
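Before moving on, the script greps its captured output (try.txt) and requires exactly three "Resetting controller successful" lines, one per forced failover. It then starts a second bdevperf in RPC-driven mode: with -z the app comes up idle on the -r socket, controllers are attached over that socket, and the I/O run is only kicked off later via bdevperf.py perform_tests. A rough sketch of that control flow; the binary paths and flags are copied from the xtrace, while the backgrounding and pid handling shown here are illustrative (the script itself relies on its waitforlisten helper).

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  sock=/var/tmp/bdevperf.sock

  # -z: start idle and wait for RPCs; -r: private RPC socket; -q/-o/-w/-t: queue depth 128,
  # 4096-byte I/O, verify workload, 1-second run once perform_tests is issued
  $bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # attach NVMe0 over $sock as in the sketch above, then trigger the actual run:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests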
00:25:33.589 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:33.589 13:10:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:34.161 13:10:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:34.161 13:10:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:34.161 13:10:55 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:34.161 [2024-07-15 13:10:55.883258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:34.161 13:10:55 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:34.422 [2024-07-15 13:10:56.047675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:34.422 13:10:56 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:34.683 NVMe0n1 00:25:34.683 13:10:56 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:34.945 00:25:34.945 13:10:56 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:35.517 00:25:35.517 13:10:57 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:35.517 13:10:57 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:35.517 13:10:57 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:35.778 13:10:57 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:39.081 13:11:00 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:39.081 13:11:00 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:39.081 13:11:00 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=811869 00:25:39.081 13:11:00 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:39.081 13:11:00 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 811869 00:25:40.021 0 00:25:40.021 13:11:01 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:40.021 [2024-07-15 13:10:54.975444] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:25:40.021 [2024-07-15 13:10:54.975502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810700 ] 00:25:40.021 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.021 [2024-07-15 13:10:55.041726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.021 [2024-07-15 13:10:55.105739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.021 [2024-07-15 13:10:57.395664] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:40.021 [2024-07-15 13:10:57.395709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.021 [2024-07-15 13:10:57.395720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.021 [2024-07-15 13:10:57.395729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.021 [2024-07-15 13:10:57.395737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.021 [2024-07-15 13:10:57.395745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.022 [2024-07-15 13:10:57.395752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.022 [2024-07-15 13:10:57.395760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.022 [2024-07-15 13:10:57.395767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.022 [2024-07-15 13:10:57.395773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.022 [2024-07-15 13:10:57.395799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.022 [2024-07-15 13:10:57.395814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b2ea0 (9): Bad file descriptor 00:25:40.022 [2024-07-15 13:10:57.407068] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:40.022 Running I/O for 1 seconds... 
00:25:40.022 00:25:40.022 Latency(us) 00:25:40.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.022 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:40.022 Verification LBA range: start 0x0 length 0x4000 00:25:40.022 NVMe0n1 : 1.01 11044.53 43.14 0.00 0.00 11537.20 2553.17 14527.15 00:25:40.022 =================================================================================================================== 00:25:40.022 Total : 11044.53 43.14 0.00 0.00 11537.20 2553.17 14527.15 00:25:40.022 13:11:01 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:40.022 13:11:01 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:40.282 13:11:01 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.282 13:11:02 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:40.282 13:11:02 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:40.542 13:11:02 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.803 13:11:02 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 810700 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 810700 ']' 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 810700 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 810700 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 810700' 00:25:44.105 killing process with pid 810700 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 810700 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 810700 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:44.105 13:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.369 13:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:44.369 13:11:05 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:44.369 13:11:05 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:44.369 13:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:44.369 13:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:44.369 13:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:44.369 13:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:44.369 13:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:44.369 13:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:44.369 rmmod nvme_tcp 00:25:44.369 rmmod nvme_fabrics 00:25:44.369 rmmod nvme_keyring 00:25:44.369 13:11:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 807133 ']' 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 807133 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 807133 ']' 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 807133 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 807133 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 807133' 00:25:44.369 killing process with pid 807133 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 807133 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 807133 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.369 13:11:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.918 13:11:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:46.918 00:25:46.918 real 0m40.356s 00:25:46.918 user 2m2.082s 00:25:46.918 sys 0m8.739s 00:25:46.918 13:11:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:46.918 13:11:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:46.918 
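The teardown above is the standard nvmftestfini path: remove the scratch log, unload the kernel NVMe/TCP initiator modules, stop the nvmf_tgt started for the test, and clear the SPDK namespace and addresses. Roughly, and only as a sketch of what the xtrace shows (the exact helpers live in nvmf/common.sh and common/autotest_common.sh; the netns delete is an assumption about what _remove_spdk_ns does):

  rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt   # per-test scratch output
  modprobe -v -r nvme-tcp                        # unload the kernel initiator stack used by nvme-cli checks
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"             # stop the target app (pid 807133 in this run)
  ip netns delete cvl_0_0_ns_spdk                # assumption: _remove_spdk_ns drops the target namespace
  ip -4 addr flush cvl_0_1                       # release the initiator-side address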
************************************ 00:25:46.918 END TEST nvmf_failover 00:25:46.918 ************************************ 00:25:46.918 13:11:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:46.918 13:11:08 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:46.918 13:11:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:46.918 13:11:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.918 13:11:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:46.918 ************************************ 00:25:46.918 START TEST nvmf_host_discovery 00:25:46.918 ************************************ 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:46.918 * Looking for test storage... 00:25:46.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:46.918 13:11:08 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:46.918 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:46.919 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:46.919 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.919 13:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:46.919 13:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.919 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:46.919 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:46.919 13:11:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:46.919 13:11:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.082 13:11:16 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:55.082 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:55.082 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:55.082 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:55.083 13:11:16 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:55.083 Found net devices under 0000:31:00.0: cvl_0_0 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:55.083 Found net devices under 0000:31:00.1: cvl_0_1 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.083 13:11:16 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:55.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:25:55.083 00:25:55.083 --- 10.0.0.2 ping statistics --- 00:25:55.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.083 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:55.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:25:55.083 00:25:55.083 --- 10.0.0.1 ping statistics --- 00:25:55.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.083 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=817551 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
817551 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 817551 ']' 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:55.083 13:11:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.083 [2024-07-15 13:11:16.720572] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:25:55.083 [2024-07-15 13:11:16.720640] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.083 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.083 [2024-07-15 13:11:16.816702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.343 [2024-07-15 13:11:16.908364] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.343 [2024-07-15 13:11:16.908422] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.343 [2024-07-15 13:11:16.908430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.343 [2024-07-15 13:11:16.908437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.343 [2024-07-15 13:11:16.908443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
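The discovery test reuses the two E810 ports detected above: cvl_0_0 is moved into a private network namespace and given 10.0.0.2 for the target, while cvl_0_1 stays in the root namespace with 10.0.0.1 for the host, and the target application runs inside that namespace. Condensed from the xtrace (all commands appear above; only the grouping is mine):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # host/initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> host

  # the target itself then runs inside the namespace, pinned to core 1 (-m 0x2)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2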
00:25:55.343 [2024-07-15 13:11:16.908476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.914 [2024-07-15 13:11:17.551316] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.914 [2024-07-15 13:11:17.563536] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.914 null0 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.914 null1 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=817593 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 817593 /tmp/host.sock 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 817593 ']' 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:55.914 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:55.914 13:11:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.914 [2024-07-15 13:11:17.659070] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:25:55.914 [2024-07-15 13:11:17.659132] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid817593 ] 00:25:55.914 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.914 [2024-07-15 13:11:17.730468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.175 [2024-07-15 13:11:17.804997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:56.744 13:11:18 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:56.744 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:56.745 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:57.006 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.007 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:57.007 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:57.007 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.007 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.007 [2024-07-15 13:11:18.786622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.007 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.007 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:57.007 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.007 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.007 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.007 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.007 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:57.007 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:57.007 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.267 
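The checks that follow (host/discovery.sh@99 onward, and the @105/@106 waits further down) do not assert immediately; they go through a small polling helper that evals the condition string up to ten times with a one-second sleep between attempts. Reconstructed from the common/autotest_common.sh xtrace in this log, the helper is roughly:

  waitforcondition() {
      local cond=$1     # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
      local max=10
      while (( max-- )); do
          if eval "$cond"; then
              return 0  # condition became true within the window
          fi
          sleep 1
      done
      return 1          # gave up after roughly ten seconds
  }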
13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.267 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:57.268 13:11:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:57.839 [2024-07-15 13:11:19.491438] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:57.839 [2024-07-15 13:11:19.491458] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:57.839 [2024-07-15 13:11:19.491472] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:57.839 [2024-07-15 13:11:19.579747] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:58.100 [2024-07-15 13:11:19.682346] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:58.100 [2024-07-15 13:11:19.682369] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:58.361 13:11:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.361 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:58.622 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:58.623 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:58.623 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:58.623 13:11:20 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:25:58.623 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:58.623 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:58.623 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:58.623 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:58.623 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.623 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.883 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.883 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:58.883 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:58.883 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:58.883 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:58.883 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:58.883 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.883 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.883 [2024-07-15 13:11:20.491242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:58.883 [2024-07-15 13:11:20.491946] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:58.883 [2024-07-15 13:11:20.491975] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:58.883 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.883 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:58.883 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:58.883 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:58.883 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.884 [2024-07-15 13:11:20.621798] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:58.884 13:11:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:59.144 [2024-07-15 13:11:20.725566] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:59.144 [2024-07-15 13:11:20.725583] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:59.144 [2024-07-15 13:11:20.725589] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.086 [2024-07-15 13:11:21.770645] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:00.086 [2024-07-15 13:11:21.770668] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:00.086 [2024-07-15 13:11:21.774558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.086 [2024-07-15 13:11:21.774578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.086 [2024-07-15 13:11:21.774588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.086 [2024-07-15 13:11:21.774595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.086 [2024-07-15 13:11:21.774603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.086 [2024-07-15 13:11:21.774610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.086 [2024-07-15 13:11:21.774618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.086 [2024-07-15 13:11:21.774630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.086 [2024-07-15 13:11:21.774638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeb9a0 is same with the state(5) to be set 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' 
']]' 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:00.086 [2024-07-15 13:11:21.784573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeb9a0 (9): Bad file descriptor 00:26:00.086 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.086 [2024-07-15 13:11:21.794610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:00.086 [2024-07-15 13:11:21.794989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.086 [2024-07-15 13:11:21.795003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfeb9a0 with addr=10.0.0.2, port=4420 00:26:00.086 [2024-07-15 13:11:21.795012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeb9a0 is same with the state(5) to be set 00:26:00.086 [2024-07-15 13:11:21.795025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeb9a0 (9): Bad file descriptor 00:26:00.086 [2024-07-15 13:11:21.795043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:00.086 [2024-07-15 13:11:21.795050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:00.086 [2024-07-15 13:11:21.795058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:00.086 [2024-07-15 13:11:21.795070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.086 [2024-07-15 13:11:21.804664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:00.086 [2024-07-15 13:11:21.805007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.086 [2024-07-15 13:11:21.805018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfeb9a0 with addr=10.0.0.2, port=4420 00:26:00.086 [2024-07-15 13:11:21.805026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeb9a0 is same with the state(5) to be set 00:26:00.086 [2024-07-15 13:11:21.805036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeb9a0 (9): Bad file descriptor 00:26:00.086 [2024-07-15 13:11:21.805046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:00.086 [2024-07-15 13:11:21.805052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:00.086 [2024-07-15 13:11:21.805063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
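Note on the host/discovery.sh@59 trace lines interleaved above: they are the test's get_subsystem_names helper polling the host application over /tmp/host.sock while the reconnect errors accumulate. A minimal sketch of that helper, reconstructed from the xtrace output (the helper body and socket path are read off the log; resolving rpc_cmd to scripts/rpc.py is an assumption):

    # Reconstructed from the host/discovery.sh@59 trace lines above; rpc.py path assumed.
    get_subsystem_names() {
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }
    # The @129 wait condition loops until this prints "nvme0", i.e. the controller
    # attached via discovery is still present after the 4420 listener was removed.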
00:26:00.086 [2024-07-15 13:11:21.805073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.086 [2024-07-15 13:11:21.814715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:00.086 [2024-07-15 13:11:21.815021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.086 [2024-07-15 13:11:21.815033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfeb9a0 with addr=10.0.0.2, port=4420 00:26:00.087 [2024-07-15 13:11:21.815040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeb9a0 is same with the state(5) to be set 00:26:00.087 [2024-07-15 13:11:21.815051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeb9a0 (9): Bad file descriptor 00:26:00.087 [2024-07-15 13:11:21.815061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:00.087 [2024-07-15 13:11:21.815067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:00.087 [2024-07-15 13:11:21.815073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:00.087 [2024-07-15 13:11:21.815084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.087 [2024-07-15 13:11:21.824769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:00.087 [2024-07-15 13:11:21.824858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.087 [2024-07-15 13:11:21.824869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfeb9a0 with addr=10.0.0.2, port=4420 00:26:00.087 [2024-07-15 13:11:21.824876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeb9a0 is same with the state(5) to be set 00:26:00.087 [2024-07-15 13:11:21.824887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeb9a0 (9): Bad file descriptor 00:26:00.087 [2024-07-15 13:11:21.824898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:00.087 [2024-07-15 13:11:21.824905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:00.087 [2024-07-15 13:11:21.824913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:00.087 [2024-07-15 13:11:21.824923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
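The common/autotest_common.sh@912 through @918 trace groups that recur throughout this output all come from the same polling helper. A sketch reconstructed from those trace lines (the condition string, the max=10 bound, the eval, and the one-second sleep are visible in the log; the failure return path is an assumption):

    # Reconstructed from the autotest_common.sh@912-@918 xtrace lines: evaluate the
    # condition up to 10 times, one second apart, and succeed as soon as it holds.
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }
    # e.g. waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'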
00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.087 [2024-07-15 13:11:21.834822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:00.087 [2024-07-15 13:11:21.835154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.087 [2024-07-15 13:11:21.835165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfeb9a0 with addr=10.0.0.2, port=4420 00:26:00.087 [2024-07-15 13:11:21.835172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeb9a0 is same with the state(5) to be set 00:26:00.087 [2024-07-15 13:11:21.835183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeb9a0 (9): Bad file descriptor 00:26:00.087 [2024-07-15 13:11:21.835199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:00.087 [2024-07-15 13:11:21.835206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:00.087 [2024-07-15 13:11:21.835213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:00.087 [2024-07-15 13:11:21.835224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
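The connect() errno=111 (ECONNREFUSED) records above are the direct consequence of the @127 step earlier in this run, which removed the 4420 listener on the target while the host still had a path through it. A hedged sketch of that target-side call, mirroring the rpc_cmd line in the log (default target rpc socket assumed):

    # Target-side RPC from host/discovery.sh@127: stop listening on 4420 while the
    # 4421 listener added at @118 stays up, forcing the host to fail over.
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # Every reconnect attempt to 10.0.0.2:4420 now fails with errno 111 until the
    # next discovery log page prunes that path.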
00:26:00.087 [2024-07-15 13:11:21.844878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:00.087 [2024-07-15 13:11:21.845123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.087 [2024-07-15 13:11:21.845137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfeb9a0 with addr=10.0.0.2, port=4420 00:26:00.087 [2024-07-15 13:11:21.845145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeb9a0 is same with the state(5) to be set 00:26:00.087 [2024-07-15 13:11:21.845156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeb9a0 (9): Bad file descriptor 00:26:00.087 [2024-07-15 13:11:21.845166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:00.087 [2024-07-15 13:11:21.845172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:00.087 [2024-07-15 13:11:21.845179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:00.087 [2024-07-15 13:11:21.845189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.087 [2024-07-15 13:11:21.854932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:00.087 [2024-07-15 13:11:21.855263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.087 [2024-07-15 13:11:21.855277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfeb9a0 with addr=10.0.0.2, port=4420 00:26:00.087 [2024-07-15 13:11:21.855284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeb9a0 is same with the state(5) to be set 00:26:00.087 [2024-07-15 13:11:21.855296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeb9a0 (9): Bad file descriptor 00:26:00.087 [2024-07-15 13:11:21.855313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:00.087 [2024-07-15 13:11:21.855319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:00.087 [2024-07-15 13:11:21.855326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:00.087 [2024-07-15 13:11:21.855337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
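Once discovery_remove_controllers reports the 4420 path "not found" and 4421 "found again" (the records that follow), the @131 wait condition can pass. A sketch of the host-side query it loops on, mirroring the host/discovery.sh@63 trace lines (rpc.py path assumed, everything else taken from the log):

    # Mirrors get_subsystem_paths from the host/discovery.sh@63 trace lines.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # Prints "4420 4421" while both paths are attached and just "4421" once the
    # stale 4420 path is pruned, which is what @131 waits for.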
00:26:00.087 [2024-07-15 13:11:21.859329] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:00.087 [2024-07-15 13:11:21.859347] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:00.087 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.348 13:11:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:00.348 
13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.348 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.349 13:11:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.736 [2024-07-15 13:11:23.221407] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:01.736 [2024-07-15 13:11:23.221424] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:01.736 [2024-07-15 13:11:23.221436] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:01.736 [2024-07-15 13:11:23.350833] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:01.997 [2024-07-15 13:11:23.618389] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:01.997 [2024-07-15 13:11:23.618418] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:01.997 request: 00:26:01.997 { 00:26:01.997 "name": "nvme", 00:26:01.997 "trtype": "tcp", 00:26:01.997 "traddr": "10.0.0.2", 00:26:01.997 "adrfam": "ipv4", 00:26:01.997 "trsvcid": "8009", 00:26:01.997 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.997 "wait_for_attach": true, 00:26:01.997 "method": "bdev_nvme_start_discovery", 00:26:01.997 "req_id": 1 00:26:01.997 } 00:26:01.997 Got JSON-RPC error response 00:26:01.997 response: 00:26:01.997 { 00:26:01.997 "code": -17, 00:26:01.997 "message": "File exists" 00:26:01.997 } 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.997 request: 00:26:01.997 { 00:26:01.997 "name": "nvme_second", 00:26:01.997 "trtype": "tcp", 00:26:01.997 "traddr": "10.0.0.2", 00:26:01.997 "adrfam": "ipv4", 00:26:01.997 "trsvcid": "8009", 00:26:01.997 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.997 "wait_for_attach": true, 00:26:01.997 "method": "bdev_nvme_start_discovery", 00:26:01.997 "req_id": 1 00:26:01.997 } 00:26:01.997 Got JSON-RPC error response 00:26:01.997 response: 00:26:01.997 { 00:26:01.997 "code": -17, 00:26:01.997 "message": "File exists" 00:26:01.997 } 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.997 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:02.271 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.271 13:11:23 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:02.271 13:11:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:02.271 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:02.271 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:02.271 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:02.271 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:02.271 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:02.271 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:02.271 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:02.271 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.271 13:11:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.211 [2024-07-15 13:11:24.869959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.211 [2024-07-15 13:11:24.869989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe7590 with addr=10.0.0.2, port=8010 00:26:03.211 [2024-07-15 13:11:24.870003] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:03.211 [2024-07-15 13:11:24.870010] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:03.211 [2024-07-15 13:11:24.870017] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:04.199 [2024-07-15 13:11:25.872449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.199 [2024-07-15 13:11:25.872487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe79d0 with addr=10.0.0.2, port=8010 00:26:04.199 [2024-07-15 13:11:25.872502] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:04.199 [2024-07-15 13:11:25.872509] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:04.199 [2024-07-15 13:11:25.872516] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:05.137 [2024-07-15 13:11:26.874284] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:05.137 request: 00:26:05.137 { 00:26:05.137 "name": "nvme_second", 00:26:05.137 "trtype": "tcp", 00:26:05.137 "traddr": "10.0.0.2", 00:26:05.137 "adrfam": "ipv4", 00:26:05.137 "trsvcid": "8010", 00:26:05.137 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:05.137 "wait_for_attach": false, 00:26:05.137 "attach_timeout_ms": 3000, 00:26:05.137 "method": "bdev_nvme_start_discovery", 00:26:05.137 "req_id": 1 00:26:05.137 } 00:26:05.137 Got JSON-RPC error response 00:26:05.137 response: 00:26:05.137 { 00:26:05.137 "code": -110, 
00:26:05.137 "message": "Connection timed out" 00:26:05.137 } 00:26:05.137 13:11:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:05.137 13:11:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 817593 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:05.138 13:11:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:05.138 rmmod nvme_tcp 00:26:05.138 rmmod nvme_fabrics 00:26:05.397 rmmod nvme_keyring 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 817551 ']' 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 817551 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 817551 ']' 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 817551 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 817551 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:05.397 
13:11:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 817551' 00:26:05.397 killing process with pid 817551 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 817551 00:26:05.397 13:11:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 817551 00:26:05.398 13:11:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:05.398 13:11:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:05.398 13:11:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:05.398 13:11:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:05.398 13:11:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:05.398 13:11:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.398 13:11:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.398 13:11:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.938 13:11:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:07.938 00:26:07.938 real 0m20.917s 00:26:07.938 user 0m23.793s 00:26:07.938 sys 0m7.527s 00:26:07.938 13:11:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:07.938 13:11:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.938 ************************************ 00:26:07.938 END TEST nvmf_host_discovery 00:26:07.938 ************************************ 00:26:07.938 13:11:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:07.938 13:11:29 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:07.938 13:11:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:07.938 13:11:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:07.938 13:11:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:07.938 ************************************ 00:26:07.938 START TEST nvmf_host_multipath_status 00:26:07.938 ************************************ 00:26:07.938 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:07.938 * Looking for test storage... 
00:26:07.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:07.938 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:07.938 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:07.938 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.938 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.938 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.938 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.938 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:07.939 13:11:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:07.939 13:11:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:16.104 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.104 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:16.104 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:16.104 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:16.104 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:16.104 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:16.104 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:16.104 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:16.104 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:16.104 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:16.104 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:16.104 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:16.105 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:16.105 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:16.105 Found net devices under 0000:31:00.0: cvl_0_0 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:16.105 Found net devices under 0000:31:00.1: cvl_0_1 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:16.105 13:11:37 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:16.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:16.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:26:16.105 00:26:16.105 --- 10.0.0.2 ping statistics --- 00:26:16.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.105 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:16.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:16.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:26:16.105 00:26:16.105 --- 10.0.0.1 ping statistics --- 00:26:16.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.105 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=824299 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 824299 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 824299 ']' 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:16.105 13:11:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:16.105 [2024-07-15 13:11:37.622030] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:26:16.105 [2024-07-15 13:11:37.622094] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.105 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.105 [2024-07-15 13:11:37.702512] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:16.105 [2024-07-15 13:11:37.777832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.105 [2024-07-15 13:11:37.777875] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.106 [2024-07-15 13:11:37.777882] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.106 [2024-07-15 13:11:37.777888] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.106 [2024-07-15 13:11:37.777894] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.106 [2024-07-15 13:11:37.778028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.106 [2024-07-15 13:11:37.778030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.675 13:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:16.675 13:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:16.675 13:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:16.675 13:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:16.675 13:11:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:16.675 13:11:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.675 13:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=824299 00:26:16.675 13:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:16.935 [2024-07-15 13:11:38.574074] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.935 13:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:16.935 Malloc0 00:26:16.935 13:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:17.195 13:11:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:17.455 13:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.455 [2024-07-15 13:11:39.180978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.455 13:11:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:17.714 [2024-07-15 13:11:39.321326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:17.714 13:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=824655 00:26:17.714 13:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:17.714 13:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:17.714 13:11:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 824655 /var/tmp/bdevperf.sock 00:26:17.714 13:11:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 824655 ']' 00:26:17.714 13:11:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:17.714 13:11:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:17.714 13:11:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:17.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:17.714 13:11:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:17.714 13:11:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:18.652 13:11:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:18.652 13:11:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:18.652 13:11:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:18.652 13:11:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:18.955 Nvme0n1 00:26:18.955 13:11:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:19.215 Nvme0n1 00:26:19.474 13:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:19.474 13:11:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:21.381 13:11:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:21.381 13:11:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:21.641 13:11:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:21.641 13:11:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:23.023 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:23.023 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:23.023 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.023 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:23.023 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.023 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:23.023 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.023 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.023 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.023 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.023 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.023 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:23.284 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.284 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:23.284 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.284 13:11:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.284 13:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.284 13:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.545 13:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.545 13:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.545 13:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.545 13:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:23.545 13:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.545 13:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.806 13:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.806 13:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:23.806 13:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:23.806 13:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:24.067 13:11:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:25.038 13:11:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:25.038 13:11:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:25.038 13:11:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.038 13:11:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.299 13:11:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:25.299 13:11:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:25.299 13:11:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.299 13:11:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.559 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.559 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.559 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.559 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.559 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.559 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.559 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.559 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.820 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.820 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.820 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.820 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.820 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.820 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.820 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.820 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:26.080 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.080 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:26.080 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:26.341 13:11:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:26.341 13:11:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:27.723 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:27.723 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.723 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.723 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.723 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.723 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:27.723 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.724 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.724 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.724 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.724 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.724 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.984 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.984 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.984 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.984 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:28.244 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.244 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:28.244 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.244 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.244 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.244 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:28.244 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.244 13:11:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.504 13:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.504 13:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:28.504 13:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:28.504 13:11:50 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:28.764 13:11:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:29.706 13:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:29.706 13:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:29.706 13:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.706 13:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:29.967 13:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.967 13:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:29.967 13:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.967 13:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:30.227 13:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.227 13:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:30.227 13:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.227 13:11:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:30.227 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.227 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:30.227 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.227 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:30.487 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.487 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:30.487 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.487 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:30.746 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:30.746 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:30.746 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.746 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.746 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.746 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:30.746 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:31.005 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:31.265 13:11:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:32.204 13:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:32.204 13:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:32.204 13:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.204 13:11:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:32.464 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.464 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:32.464 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.464 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:32.464 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.464 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:32.464 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.464 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:32.725 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.725 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:32.725 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:32.726 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.987 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.987 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:32.987 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.987 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:32.987 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.987 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:32.987 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.987 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:33.247 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.247 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:33.247 13:11:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:33.247 13:11:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:33.508 13:11:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:34.449 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:34.449 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:34.449 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.449 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:34.709 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:34.709 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:34.709 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.709 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:34.972 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.972 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:34.972 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.972 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:34.972 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.972 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:34.972 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.972 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:35.233 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.233 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:35.233 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.233 13:11:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:35.495 13:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:35.495 13:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:35.495 13:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.495 13:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:35.495 13:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.495 13:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:35.756 13:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:35.756 13:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:36.017 13:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:36.017 13:11:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:36.961 13:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:36.961 13:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:37.222 13:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.222 13:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:37.222 13:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.222 13:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:37.222 13:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.222 13:11:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.483 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.483 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:37.483 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.483 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:37.483 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.483 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:37.483 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.483 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:37.744 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.744 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:37.744 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.744 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:38.005 13:11:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.005 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:38.005 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.005 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:38.005 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.005 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:38.005 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:38.265 13:11:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:38.582 13:12:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:39.545 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:39.545 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:39.545 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.545 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:39.545 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.545 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:39.545 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.545 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:39.805 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.805 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:39.805 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.805 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:40.066 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.066 13:12:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:40.066 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.066 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.066 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.066 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:40.066 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.066 13:12:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:40.326 13:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.326 13:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:40.326 13:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.326 13:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:40.586 13:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.586 13:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:40.586 13:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:40.586 13:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:40.846 13:12:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:41.785 13:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:41.785 13:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:41.785 13:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.785 13:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:42.045 13:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.045 13:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:42.045 13:12:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:42.045 13:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.045 13:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.045 13:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:42.045 13:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.045 13:12:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:42.304 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.304 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:42.304 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.304 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:42.564 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.564 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:42.564 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.564 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:42.564 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.564 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:42.564 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.564 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:42.824 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.824 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:42.824 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:43.084 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:43.084 13:12:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:44.468 13:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:44.468 13:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:44.468 13:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.468 13:12:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:44.468 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.468 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:44.468 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.468 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:44.468 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:44.469 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:44.469 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.469 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:44.728 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.729 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:44.729 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.729 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:44.988 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.988 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:44.988 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.988 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:44.988 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:44.988 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:44.988 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.988 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:45.248 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:45.248 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 824655 00:26:45.249 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 824655 ']' 00:26:45.249 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 824655 00:26:45.249 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:45.249 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:45.249 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 824655 00:26:45.249 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:45.249 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:45.249 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 824655' 00:26:45.249 killing process with pid 824655 00:26:45.249 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 824655 00:26:45.249 13:12:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 824655 00:26:45.249 Connection closed with partial response: 00:26:45.249 00:26:45.249 00:26:45.512 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 824655 00:26:45.512 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:45.512 [2024-07-15 13:11:39.383387] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:26:45.512 [2024-07-15 13:11:39.383442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824655 ] 00:26:45.512 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.512 [2024-07-15 13:11:39.439635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.512 [2024-07-15 13:11:39.491679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.512 Running I/O for 90 seconds... 
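Reading note for the trace above: the test cycles the two listeners (10.0.0.2:4420 and 10.0.0.2:4421) through ANA states and, after each change plus a one-second settle, checks what the initiator-side bdevperf reports for every path. The condensed sketch below restates that pattern using only commands that appear verbatim in the trace (rpc.py nvmf_subsystem_listener_set_ana_state, rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths, and the jq filter keyed on trsvcid); SPDK_ROOT is a stand-in for the workspace path, so treat this as an illustrative reading of the flow, not the script itself.

    # Sketch only: condensed from the traced multipath_status.sh flow above.
    # SPDK_ROOT is a placeholder for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk.
    rpc="$SPDK_ROOT/scripts/rpc.py"

    set_ana() {        # target side: flip the ANA state of one listener
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$1" -n "$2"
    }

    port_status() {    # host side: read one field for one port from bdevperf
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
    }

    set_ana 4420 non_optimized
    set_ana 4421 inaccessible
    sleep 1
    [[ $(port_status 4421 accessible) == false ]]   # mirrors the check_status calls above

The entries that follow are the contents of test/nvmf/host/try.txt, the bdevperf log dumped by the cat command above, covering the I/O issued while the paths were being flipped.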
00:26:45.512 [2024-07-15 13:11:52.674130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.512 [2024-07-15 13:11:52.674446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:45.512 [2024-07-15 13:11:52.674534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.512 [2024-07-15 13:11:52.674539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
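This stretch of the dump is the same pair repeated: nvme_io_qpair_print_command logging a queued WRITE (or READ) and spdk_nvme_print_completion logging its completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. I/O answered on a path whose ANA state the test had just set to inaccessible. For a quick tally instead of reading entry by entry, a plain grep pass over try.txt is enough; the path below is the one printed by the cat command earlier and the patterns are taken from the entry format shown here, so adjust them if your log format differs.

    # Rough tally over the bdevperf log (sketch, patterns taken from the dump above).
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$log"     # completions that hit the inaccessible path
    grep -o 'sqid:[0-9]* cid:[0-9]*' "$log" | sort | uniq -c | sort -rn | head   # busiest queue/command ids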
00:26:45.513 [2024-07-15 13:11:52.674649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 
nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.674985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.674990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675057] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
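The "(03/02)" printed with each completion is SPDK's status-code-type/status-code rendering: type 03h is the NVMe path-related status group and code 02h within it is Asymmetric Access Inaccessible, which is why the text label reads ASYMMETRIC ACCESS INACCESSIBLE. A throwaway lookup like the hypothetical helper below, covering only the pair that occurs in this dump, can make skimming mixed dumps easier.

    # Hypothetical helper: expand the (sct/sc) pair SPDK prints, for the code seen here.
    decode_status() {
        case "$1" in
            03/02) echo "path-related: asymmetric access inaccessible" ;;
            *)     echo "see the NVMe status tables for sct/sc $1" ;;
        esac
    }
    decode_status 03/02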
00:26:45.513 [2024-07-15 13:11:52.675399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.513 [2024-07-15 13:11:52.675442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.513 [2024-07-15 13:11:52.675461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.513 [2024-07-15 13:11:52.675480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:45.513 [2024-07-15 13:11:52.675494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.514 [2024-07-15 13:11:52.675499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.514 [2024-07-15 13:11:52.675519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.514 [2024-07-15 13:11:52.675539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.514 [2024-07-15 13:11:52.675559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.514 [2024-07-15 13:11:52.675578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.675980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.675986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 
[2024-07-15 13:11:52.676129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56560 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.676622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.676627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.677209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.677215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:45.514 [2024-07-15 13:11:52.677234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.514 [2024-07-15 13:11:52.677239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:11:52.677261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:11:52.677282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:79 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:11:52.677305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:11:52.677327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:11:52.677348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677516] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:55824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:11:52.677732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:11:52.677749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:11:52.677755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 
sqhd:0077 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.854488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:12:04.854523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.854552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:12:04.854558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.854568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:12:04.854574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.854584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:115792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:12:04.854590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.854600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:12:04.854605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.854619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:12:04.854625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.854635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:12:04.854640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.855050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:12:04.855059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.855070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:12:04.855076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.855087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.515 [2024-07-15 13:12:04.855092] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.855102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:12:04.855108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.855118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:12:04.855123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.855133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:12:04.855138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.855148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:12:04.855153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:45.515 [2024-07-15 13:12:04.855163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.515 [2024-07-15 13:12:04.855168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:45.515 Received shutdown signal, test time was about 25.782297 seconds 00:26:45.515 00:26:45.515 Latency(us) 00:26:45.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.515 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:45.515 Verification LBA range: start 0x0 length 0x4000 00:26:45.515 Nvme0n1 : 25.78 10905.26 42.60 0.00 0.00 11718.11 397.65 3019898.88 00:26:45.515 =================================================================================================================== 00:26:45.515 Total : 10905.26 42.60 0.00 0.00 11718.11 397.65 3019898.88 00:26:45.515 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:45.515 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:45.515 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:45.515 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:45.515 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:45.516 13:12:07 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.516 rmmod nvme_tcp 00:26:45.516 rmmod nvme_fabrics 00:26:45.516 rmmod nvme_keyring 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 824299 ']' 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 824299 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 824299 ']' 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 824299 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:45.516 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 824299 00:26:45.776 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:45.776 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:45.776 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 824299' 00:26:45.776 killing process with pid 824299 00:26:45.776 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 824299 00:26:45.776 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 824299 00:26:45.776 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:45.776 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:45.776 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:45.776 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.776 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:45.776 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.776 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.776 13:12:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.323 13:12:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:48.323 00:26:48.323 real 0m40.257s 00:26:48.323 user 1m41.982s 00:26:48.323 sys 0m11.330s 00:26:48.323 13:12:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:48.323 13:12:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:48.323 ************************************ 00:26:48.323 END TEST nvmf_host_multipath_status 00:26:48.323 ************************************ 00:26:48.323 13:12:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
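The nvmf_host_multipath_status run above ends cleanly: over the 25.78 s verify job, bdevperf reports about 10905.26 IOPS (42.60 MiB/s) with Fail/s and TO/s both at 0.00, despite the long stream of ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions logged while the test exercised the multipath configuration. Stripped of the xtrace noise, the teardown that follows the summary table amounts to the shell steps below (a condensed sketch of the trace, not the verbatim scripts; pid 824299 and the cvl_0_* interface names are specific to this run):

  # condensed from the nvmf_delete_subsystem / nvmftestfini / nvmfcleanup trace above
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  sync
  modprobe -v -r nvme-tcp        # rmmod's nvme_tcp, nvme_fabrics and nvme_keyring, as echoed above
  modprobe -v -r nvme-fabrics
  kill 824299                    # killprocess: stop the nvmf_tgt reactor used by this test
  wait 824299
  # _remove_spdk_ns (its output is redirected away above) tears down the SPDK target network namespace
  ip -4 addr flush cvl_0_1       # drop the initiator-side address before the next test re-creates it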
00:26:48.323 13:12:09 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:48.323 13:12:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:48.323 13:12:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:48.323 13:12:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.323 ************************************ 00:26:48.323 START TEST nvmf_discovery_remove_ifc 00:26:48.323 ************************************ 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:48.323 * Looking for test storage... 00:26:48.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:48.323 13:12:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.323 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:48.324 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:48.324 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:48.324 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.324 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.324 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.324 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:48.324 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:48.324 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:48.324 13:12:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 
-- # mlx=() 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:56.468 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:56.468 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:56.468 Found net devices under 0000:31:00.0: cvl_0_0 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.468 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:56.469 Found net devices under 0000:31:00.1: cvl_0_1 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:56.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:26:56.469 00:26:56.469 --- 10.0.0.2 ping statistics --- 00:26:56.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.469 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:56.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:26:56.469 00:26:56.469 --- 10.0.0.1 ping statistics --- 00:26:56.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.469 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=835318 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 835318 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 835318 ']' 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:56.469 13:12:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.469 [2024-07-15 13:12:17.989750] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:26:56.469 [2024-07-15 13:12:17.989813] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.469 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.469 [2024-07-15 13:12:18.087169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.469 [2024-07-15 13:12:18.180652] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.469 [2024-07-15 13:12:18.180705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.469 [2024-07-15 13:12:18.180713] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.469 [2024-07-15 13:12:18.180720] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.469 [2024-07-15 13:12:18.180726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:56.469 [2024-07-15 13:12:18.180751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.042 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.042 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:57.042 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:57.042 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:57.042 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.042 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.042 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:57.042 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.042 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.042 [2024-07-15 13:12:18.822772] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.042 [2024-07-15 13:12:18.830982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:57.042 null0 00:26:57.042 [2024-07-15 13:12:18.862958] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.303 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.303 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=835600 00:26:57.303 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 835600 /tmp/host.sock 00:26:57.303 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:57.303 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 835600 ']' 00:26:57.304 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:57.304 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
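At this point the target side of the discovery_remove_ifc test is up: nvmf_tgt (pid 835318) is running inside the cvl_0_0_ns_spdk namespace and listening on 10.0.0.2 at port 8009 (the discovery service) and port 4420; the bare null0 line between the two listen notices is presumably the return value of a null-bdev create call inside the batched rpc_cmd, which the trace does not echo. Stripped of the xtrace noise, the plumbing that makes 10.0.0.2 reachable from the root namespace is roughly the following (a condensed sketch of the nvmf_tcp_init/nvmfappstart trace above; the cvl_0_* names are specific to this host's NIC ports):

  # condensed namespace/IP setup from the trace above
  ip netns add cvl_0_0_ns_spdk                                     # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # keep the host firewall out of the way for NVMe/TCP
  ping -c 1 10.0.0.2                                               # reachability checks seen above (0.664 ms / 0.112 ms)
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2   # the target whose listen notices appear above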
max_retries=100 00:26:57.304 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:57.304 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:57.304 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:57.304 13:12:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.304 [2024-07-15 13:12:18.944519] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:26:57.304 [2024-07-15 13:12:18.944583] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835600 ] 00:26:57.304 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.304 [2024-07-15 13:12:19.015410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.304 [2024-07-15 13:12:19.088886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.246 13:12:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.189 [2024-07-15 13:12:20.810549] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:59.189 [2024-07-15 13:12:20.810574] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:59.189 [2024-07-15 13:12:20.810587] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:59.189 [2024-07-15 13:12:20.898856] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:59.189 [2024-07-15 13:12:21.004475] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:59.189 [2024-07-15 13:12:21.004525] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:59.189 [2024-07-15 13:12:21.004547] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:59.189 [2024-07-15 13:12:21.004563] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:59.189 [2024-07-15 13:12:21.004584] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:59.189 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.189 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:59.189 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.189 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.189 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.189 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.189 [2024-07-15 13:12:21.010493] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e0d500 was disconnected and freed. delete nvme_qpair. 00:26:59.189 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.189 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.189 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.449 13:12:21 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:59.449 13:12:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:00.831 13:12:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.831 13:12:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.831 13:12:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.831 13:12:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.831 13:12:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.831 13:12:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.831 13:12:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.831 13:12:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.831 13:12:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:00.831 13:12:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.770 13:12:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.770 13:12:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.770 13:12:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.770 13:12:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.770 13:12:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.770 13:12:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.770 13:12:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.770 13:12:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.770 13:12:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:01.770 13:12:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:02.710 13:12:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:02.710 13:12:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.710 13:12:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:02.710 13:12:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.710 13:12:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:02.710 13:12:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.710 13:12:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:02.710 13:12:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.710 13:12:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:02.710 13:12:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:03.648 13:12:25 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:03.648 13:12:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:03.648 13:12:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:03.648 13:12:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.648 13:12:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:03.648 13:12:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.648 13:12:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:03.648 13:12:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.908 13:12:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:03.908 13:12:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:04.846 [2024-07-15 13:12:26.445069] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:04.846 [2024-07-15 13:12:26.445110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.846 [2024-07-15 13:12:26.445123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.846 [2024-07-15 13:12:26.445132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.846 [2024-07-15 13:12:26.445139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.846 [2024-07-15 13:12:26.445148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.846 [2024-07-15 13:12:26.445155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.846 [2024-07-15 13:12:26.445163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.846 [2024-07-15 13:12:26.445170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.846 [2024-07-15 13:12:26.445178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.846 [2024-07-15 13:12:26.445185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.846 [2024-07-15 13:12:26.445193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd40a0 is same with the state(5) to be set 00:27:04.846 [2024-07-15 13:12:26.455090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd40a0 (9): Bad file descriptor 00:27:04.846 [2024-07-15 13:12:26.465130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:04.846 13:12:26 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:04.846 13:12:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.846 13:12:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:04.846 13:12:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.846 13:12:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.846 13:12:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:04.846 13:12:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:05.787 [2024-07-15 13:12:27.473268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:05.787 [2024-07-15 13:12:27.473312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd40a0 with addr=10.0.0.2, port=4420 00:27:05.787 [2024-07-15 13:12:27.473326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd40a0 is same with the state(5) to be set 00:27:05.787 [2024-07-15 13:12:27.473354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd40a0 (9): Bad file descriptor 00:27:05.787 [2024-07-15 13:12:27.473723] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:05.787 [2024-07-15 13:12:27.473741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:05.787 [2024-07-15 13:12:27.473748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:05.787 [2024-07-15 13:12:27.473758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:05.787 [2024-07-15 13:12:27.473774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.787 [2024-07-15 13:12:27.473782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:05.787 13:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.787 13:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:05.787 13:12:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.729 [2024-07-15 13:12:28.476155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:06.729 [2024-07-15 13:12:28.476174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:06.729 [2024-07-15 13:12:28.476182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:06.729 [2024-07-15 13:12:28.476189] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:06.729 [2024-07-15 13:12:28.476202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
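The connect() failures and repeated "controller reinitialization failed" messages above are the intended result of taking cvl_0_0 down: with the reconnect parameters passed to bdev_nvme_start_discovery at the start of this test, the bdev layer retries the TCP connection once per second and gives up two seconds after connectivity is lost, at which point nvme0n1 is deleted. A stand-alone sketch of that RPC, assuming the test's rpc_cmd wrapper corresponds to SPDK's scripts/rpc.py pointed at the /tmp/host.sock socket (all values copied from the log above):

# Sketch only -- mirrors the discovery RPC issued earlier in this test.
#   --reconnect-delay-sec 1       retry the TCP connection every second
#   --ctrlr-loss-timeout-sec 2    delete the controller (and its bdevs) 2 s after the link is lost
#   --fast-io-fail-timeout-sec 1  fail outstanding I/O after 1 s instead of queueing it
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
  -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
  -q nqn.2021-12.io.spdk:test \
  --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
  --wait-for-attach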
00:27:06.729 [2024-07-15 13:12:28.476220] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:06.729 [2024-07-15 13:12:28.476248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.729 [2024-07-15 13:12:28.476259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.729 [2024-07-15 13:12:28.476269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.729 [2024-07-15 13:12:28.476277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.729 [2024-07-15 13:12:28.476285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.729 [2024-07-15 13:12:28.476292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.729 [2024-07-15 13:12:28.476301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.729 [2024-07-15 13:12:28.476308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.729 [2024-07-15 13:12:28.476316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.729 [2024-07-15 13:12:28.476323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.729 [2024-07-15 13:12:28.476336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
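The burst of "ABORTED - SQ DELETION" completions above is the admin queue being flushed while that controller is torn down; once the teardown finishes, bdev_get_bdevs returns an empty list, which is exactly what the next get_bdev_list comparison checks. The wait_for_bdev/get_bdev_list pattern seen throughout this run amounts to a small polling loop; a rough equivalent (a sketch, not the actual test helper), assuming scripts/rpc.py and jq are available:

# Poll the SPDK app until the bdev list matches the expected value.
# Pass "" to wait for the list to become empty, or e.g. "nvme1n1" to wait for a bdev to appear.
wait_for_bdev_sketch() {
  local want="$1"
  local i have
  for i in $(seq 1 30); do
    have=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
    [[ "$have" == "$want" ]] && return 0
    sleep 1
  done
  return 1
}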
00:27:06.729 [2024-07-15 13:12:28.476699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd3520 (9): Bad file descriptor 00:27:06.729 [2024-07-15 13:12:28.477712] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:06.729 [2024-07-15 13:12:28.477723] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:06.729 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.729 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.729 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.729 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.729 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.729 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.729 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.729 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.729 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:06.729 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.729 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.989 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:06.989 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.989 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.989 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.989 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.989 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.989 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.989 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.989 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.989 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:06.989 13:12:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.929 13:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.929 13:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.929 13:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.929 13:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.929 13:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.929 13:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.929 13:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.929 13:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.929 13:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:07.929 13:12:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:08.872 [2024-07-15 13:12:30.537365] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:08.872 [2024-07-15 13:12:30.537387] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:08.872 [2024-07-15 13:12:30.537406] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:08.872 [2024-07-15 13:12:30.665812] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:09.132 13:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:09.132 13:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.132 13:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:09.132 13:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.132 13:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:09.132 13:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.132 13:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:09.132 13:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.132 [2024-07-15 13:12:30.767481] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:09.132 [2024-07-15 13:12:30.767519] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:09.132 [2024-07-15 13:12:30.767538] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:09.132 [2024-07-15 13:12:30.767552] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:09.132 [2024-07-15 13:12:30.767560] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:09.132 [2024-07-15 13:12:30.773848] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e16cb0 was disconnected and freed. delete nvme_qpair. 
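With 10.0.0.2/24 restored and cvl_0_0 brought back up, the discovery poller reconnects and attaches the subsystem as a fresh controller, nvme1, and the loop that follows waits for nvme1n1 to reappear in the bdev list. A quick manual spot-check of the same state (a sketch, not part of the test), again assuming scripts/rpc.py against the app's /tmp/host.sock socket:

# Confirm the re-attach by hand: list NVMe controllers and the resulting bdevs.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers          # expect an "nvme1" entry
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'  # expect "nvme1n1"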
00:27:09.132 13:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:09.132 13:12:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:10.072 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:10.072 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.072 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 835600 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 835600 ']' 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 835600 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 835600 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 835600' 00:27:10.073 killing process with pid 835600 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 835600 00:27:10.073 13:12:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 835600 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:10.333 rmmod nvme_tcp 00:27:10.333 rmmod nvme_fabrics 00:27:10.333 rmmod nvme_keyring 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 835318 ']' 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 835318 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 835318 ']' 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 835318 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 835318 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 835318' 00:27:10.333 killing process with pid 835318 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 835318 00:27:10.333 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 835318 00:27:10.594 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:10.594 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:10.594 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:10.594 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.594 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:10.594 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.594 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.594 13:12:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.136 13:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:13.136 00:27:13.136 real 0m24.667s 00:27:13.136 user 0m29.148s 00:27:13.136 sys 0m7.368s 00:27:13.136 13:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:13.136 13:12:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:13.136 ************************************ 00:27:13.136 END TEST nvmf_discovery_remove_ifc 00:27:13.136 ************************************ 00:27:13.136 13:12:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:13.136 13:12:34 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:13.136 13:12:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:13.136 13:12:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.136 13:12:34 
nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:13.136 ************************************ 00:27:13.136 START TEST nvmf_identify_kernel_target 00:27:13.136 ************************************ 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:13.136 * Looking for test storage... 00:27:13.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.136 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:13.137 13:12:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:13.137 13:12:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.408 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:21.409 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:21.409 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:21.409 Found net devices under 0000:31:00.0: cvl_0_0 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:21.409 Found net devices under 0000:31:00.1: cvl_0_1 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:21.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:27:21.409 00:27:21.409 --- 10.0.0.2 ping statistics --- 00:27:21.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.409 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:21.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:27:21.409 00:27:21.409 --- 10.0.0.1 ping statistics --- 00:27:21.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.409 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:21.409 13:12:42 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:21.409 13:12:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:24.706 Waiting for block devices as requested 00:27:24.706 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:24.966 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:24.966 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:24.966 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:25.227 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:25.227 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:25.227 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:25.513 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:25.513 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:25.513 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:25.773 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:25.773 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:25.773 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:25.773 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:26.033 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:26.033 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:26.033 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:26.033 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:26.033 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:26.033 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:26.033 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:26.033 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:26.033 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:26.033 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:26.033 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:26.033 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:26.033 No valid GPT data, bailing 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:26.294 00:27:26.294 Discovery Log Number of Records 2, Generation counter 2 00:27:26.294 =====Discovery Log Entry 0====== 00:27:26.294 trtype: tcp 00:27:26.294 adrfam: ipv4 00:27:26.294 subtype: current discovery subsystem 00:27:26.294 treq: not specified, sq flow control disable supported 00:27:26.294 portid: 1 00:27:26.294 trsvcid: 4420 00:27:26.294 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:26.294 traddr: 10.0.0.1 00:27:26.294 eflags: none 00:27:26.294 sectype: none 00:27:26.294 =====Discovery Log Entry 1====== 00:27:26.294 trtype: tcp 00:27:26.294 adrfam: ipv4 00:27:26.294 subtype: nvme subsystem 00:27:26.294 treq: not specified, sq flow control disable supported 00:27:26.294 portid: 1 00:27:26.294 trsvcid: 4420 00:27:26.294 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:26.294 traddr: 10.0.0.1 00:27:26.294 eflags: none 00:27:26.294 sectype: none 00:27:26.294 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:26.294 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:26.294 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.294 ===================================================== 00:27:26.294 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:26.294 ===================================================== 00:27:26.294 Controller Capabilities/Features 00:27:26.294 ================================ 00:27:26.294 Vendor ID: 0000 00:27:26.294 Subsystem Vendor ID: 0000 00:27:26.294 Serial Number: 993708de2491a80500e2 00:27:26.294 Model Number: Linux 00:27:26.294 Firmware Version: 6.7.0-68 00:27:26.294 Recommended Arb Burst: 0 00:27:26.294 IEEE OUI Identifier: 00 00 00 00:27:26.294 Multi-path I/O 00:27:26.294 May have multiple subsystem ports: No 00:27:26.294 May have multiple 
controllers: No 00:27:26.294 Associated with SR-IOV VF: No 00:27:26.294 Max Data Transfer Size: Unlimited 00:27:26.294 Max Number of Namespaces: 0 00:27:26.294 Max Number of I/O Queues: 1024 00:27:26.294 NVMe Specification Version (VS): 1.3 00:27:26.294 NVMe Specification Version (Identify): 1.3 00:27:26.294 Maximum Queue Entries: 1024 00:27:26.294 Contiguous Queues Required: No 00:27:26.294 Arbitration Mechanisms Supported 00:27:26.294 Weighted Round Robin: Not Supported 00:27:26.294 Vendor Specific: Not Supported 00:27:26.294 Reset Timeout: 7500 ms 00:27:26.294 Doorbell Stride: 4 bytes 00:27:26.294 NVM Subsystem Reset: Not Supported 00:27:26.294 Command Sets Supported 00:27:26.294 NVM Command Set: Supported 00:27:26.294 Boot Partition: Not Supported 00:27:26.295 Memory Page Size Minimum: 4096 bytes 00:27:26.295 Memory Page Size Maximum: 4096 bytes 00:27:26.295 Persistent Memory Region: Not Supported 00:27:26.295 Optional Asynchronous Events Supported 00:27:26.295 Namespace Attribute Notices: Not Supported 00:27:26.295 Firmware Activation Notices: Not Supported 00:27:26.295 ANA Change Notices: Not Supported 00:27:26.295 PLE Aggregate Log Change Notices: Not Supported 00:27:26.295 LBA Status Info Alert Notices: Not Supported 00:27:26.295 EGE Aggregate Log Change Notices: Not Supported 00:27:26.295 Normal NVM Subsystem Shutdown event: Not Supported 00:27:26.295 Zone Descriptor Change Notices: Not Supported 00:27:26.295 Discovery Log Change Notices: Supported 00:27:26.295 Controller Attributes 00:27:26.295 128-bit Host Identifier: Not Supported 00:27:26.295 Non-Operational Permissive Mode: Not Supported 00:27:26.295 NVM Sets: Not Supported 00:27:26.295 Read Recovery Levels: Not Supported 00:27:26.295 Endurance Groups: Not Supported 00:27:26.295 Predictable Latency Mode: Not Supported 00:27:26.295 Traffic Based Keep ALive: Not Supported 00:27:26.295 Namespace Granularity: Not Supported 00:27:26.295 SQ Associations: Not Supported 00:27:26.295 UUID List: Not Supported 00:27:26.295 Multi-Domain Subsystem: Not Supported 00:27:26.295 Fixed Capacity Management: Not Supported 00:27:26.295 Variable Capacity Management: Not Supported 00:27:26.295 Delete Endurance Group: Not Supported 00:27:26.295 Delete NVM Set: Not Supported 00:27:26.295 Extended LBA Formats Supported: Not Supported 00:27:26.295 Flexible Data Placement Supported: Not Supported 00:27:26.295 00:27:26.295 Controller Memory Buffer Support 00:27:26.295 ================================ 00:27:26.295 Supported: No 00:27:26.295 00:27:26.295 Persistent Memory Region Support 00:27:26.295 ================================ 00:27:26.295 Supported: No 00:27:26.295 00:27:26.295 Admin Command Set Attributes 00:27:26.295 ============================ 00:27:26.295 Security Send/Receive: Not Supported 00:27:26.295 Format NVM: Not Supported 00:27:26.295 Firmware Activate/Download: Not Supported 00:27:26.295 Namespace Management: Not Supported 00:27:26.295 Device Self-Test: Not Supported 00:27:26.295 Directives: Not Supported 00:27:26.295 NVMe-MI: Not Supported 00:27:26.295 Virtualization Management: Not Supported 00:27:26.295 Doorbell Buffer Config: Not Supported 00:27:26.295 Get LBA Status Capability: Not Supported 00:27:26.295 Command & Feature Lockdown Capability: Not Supported 00:27:26.295 Abort Command Limit: 1 00:27:26.295 Async Event Request Limit: 1 00:27:26.295 Number of Firmware Slots: N/A 00:27:26.295 Firmware Slot 1 Read-Only: N/A 00:27:26.295 Firmware Activation Without Reset: N/A 00:27:26.295 Multiple Update Detection Support: N/A 
00:27:26.295 Firmware Update Granularity: No Information Provided 00:27:26.295 Per-Namespace SMART Log: No 00:27:26.295 Asymmetric Namespace Access Log Page: Not Supported 00:27:26.295 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:26.295 Command Effects Log Page: Not Supported 00:27:26.295 Get Log Page Extended Data: Supported 00:27:26.295 Telemetry Log Pages: Not Supported 00:27:26.295 Persistent Event Log Pages: Not Supported 00:27:26.295 Supported Log Pages Log Page: May Support 00:27:26.295 Commands Supported & Effects Log Page: Not Supported 00:27:26.295 Feature Identifiers & Effects Log Page:May Support 00:27:26.295 NVMe-MI Commands & Effects Log Page: May Support 00:27:26.295 Data Area 4 for Telemetry Log: Not Supported 00:27:26.295 Error Log Page Entries Supported: 1 00:27:26.295 Keep Alive: Not Supported 00:27:26.295 00:27:26.295 NVM Command Set Attributes 00:27:26.295 ========================== 00:27:26.295 Submission Queue Entry Size 00:27:26.295 Max: 1 00:27:26.295 Min: 1 00:27:26.295 Completion Queue Entry Size 00:27:26.295 Max: 1 00:27:26.295 Min: 1 00:27:26.295 Number of Namespaces: 0 00:27:26.295 Compare Command: Not Supported 00:27:26.295 Write Uncorrectable Command: Not Supported 00:27:26.295 Dataset Management Command: Not Supported 00:27:26.295 Write Zeroes Command: Not Supported 00:27:26.295 Set Features Save Field: Not Supported 00:27:26.295 Reservations: Not Supported 00:27:26.295 Timestamp: Not Supported 00:27:26.295 Copy: Not Supported 00:27:26.295 Volatile Write Cache: Not Present 00:27:26.295 Atomic Write Unit (Normal): 1 00:27:26.295 Atomic Write Unit (PFail): 1 00:27:26.295 Atomic Compare & Write Unit: 1 00:27:26.295 Fused Compare & Write: Not Supported 00:27:26.295 Scatter-Gather List 00:27:26.295 SGL Command Set: Supported 00:27:26.295 SGL Keyed: Not Supported 00:27:26.295 SGL Bit Bucket Descriptor: Not Supported 00:27:26.295 SGL Metadata Pointer: Not Supported 00:27:26.295 Oversized SGL: Not Supported 00:27:26.295 SGL Metadata Address: Not Supported 00:27:26.295 SGL Offset: Supported 00:27:26.295 Transport SGL Data Block: Not Supported 00:27:26.295 Replay Protected Memory Block: Not Supported 00:27:26.295 00:27:26.295 Firmware Slot Information 00:27:26.295 ========================= 00:27:26.295 Active slot: 0 00:27:26.295 00:27:26.295 00:27:26.295 Error Log 00:27:26.295 ========= 00:27:26.295 00:27:26.295 Active Namespaces 00:27:26.295 ================= 00:27:26.295 Discovery Log Page 00:27:26.295 ================== 00:27:26.295 Generation Counter: 2 00:27:26.295 Number of Records: 2 00:27:26.295 Record Format: 0 00:27:26.295 00:27:26.295 Discovery Log Entry 0 00:27:26.295 ---------------------- 00:27:26.295 Transport Type: 3 (TCP) 00:27:26.295 Address Family: 1 (IPv4) 00:27:26.295 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:26.295 Entry Flags: 00:27:26.295 Duplicate Returned Information: 0 00:27:26.295 Explicit Persistent Connection Support for Discovery: 0 00:27:26.295 Transport Requirements: 00:27:26.295 Secure Channel: Not Specified 00:27:26.295 Port ID: 1 (0x0001) 00:27:26.295 Controller ID: 65535 (0xffff) 00:27:26.295 Admin Max SQ Size: 32 00:27:26.295 Transport Service Identifier: 4420 00:27:26.295 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:26.295 Transport Address: 10.0.0.1 00:27:26.295 Discovery Log Entry 1 00:27:26.295 ---------------------- 00:27:26.295 Transport Type: 3 (TCP) 00:27:26.295 Address Family: 1 (IPv4) 00:27:26.295 Subsystem Type: 2 (NVM Subsystem) 00:27:26.295 Entry Flags: 
00:27:26.295 Duplicate Returned Information: 0 00:27:26.295 Explicit Persistent Connection Support for Discovery: 0 00:27:26.295 Transport Requirements: 00:27:26.295 Secure Channel: Not Specified 00:27:26.295 Port ID: 1 (0x0001) 00:27:26.295 Controller ID: 65535 (0xffff) 00:27:26.295 Admin Max SQ Size: 32 00:27:26.295 Transport Service Identifier: 4420 00:27:26.295 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:26.295 Transport Address: 10.0.0.1 00:27:26.295 13:12:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:26.295 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.295 get_feature(0x01) failed 00:27:26.295 get_feature(0x02) failed 00:27:26.295 get_feature(0x04) failed 00:27:26.295 ===================================================== 00:27:26.295 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:26.295 ===================================================== 00:27:26.295 Controller Capabilities/Features 00:27:26.295 ================================ 00:27:26.295 Vendor ID: 0000 00:27:26.295 Subsystem Vendor ID: 0000 00:27:26.295 Serial Number: 557fa8abe741eb32e90b 00:27:26.295 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:26.295 Firmware Version: 6.7.0-68 00:27:26.295 Recommended Arb Burst: 6 00:27:26.295 IEEE OUI Identifier: 00 00 00 00:27:26.295 Multi-path I/O 00:27:26.295 May have multiple subsystem ports: Yes 00:27:26.295 May have multiple controllers: Yes 00:27:26.295 Associated with SR-IOV VF: No 00:27:26.295 Max Data Transfer Size: Unlimited 00:27:26.295 Max Number of Namespaces: 1024 00:27:26.295 Max Number of I/O Queues: 128 00:27:26.295 NVMe Specification Version (VS): 1.3 00:27:26.295 NVMe Specification Version (Identify): 1.3 00:27:26.295 Maximum Queue Entries: 1024 00:27:26.295 Contiguous Queues Required: No 00:27:26.295 Arbitration Mechanisms Supported 00:27:26.295 Weighted Round Robin: Not Supported 00:27:26.295 Vendor Specific: Not Supported 00:27:26.295 Reset Timeout: 7500 ms 00:27:26.295 Doorbell Stride: 4 bytes 00:27:26.295 NVM Subsystem Reset: Not Supported 00:27:26.295 Command Sets Supported 00:27:26.295 NVM Command Set: Supported 00:27:26.295 Boot Partition: Not Supported 00:27:26.295 Memory Page Size Minimum: 4096 bytes 00:27:26.295 Memory Page Size Maximum: 4096 bytes 00:27:26.295 Persistent Memory Region: Not Supported 00:27:26.295 Optional Asynchronous Events Supported 00:27:26.295 Namespace Attribute Notices: Supported 00:27:26.295 Firmware Activation Notices: Not Supported 00:27:26.295 ANA Change Notices: Supported 00:27:26.295 PLE Aggregate Log Change Notices: Not Supported 00:27:26.295 LBA Status Info Alert Notices: Not Supported 00:27:26.295 EGE Aggregate Log Change Notices: Not Supported 00:27:26.295 Normal NVM Subsystem Shutdown event: Not Supported 00:27:26.295 Zone Descriptor Change Notices: Not Supported 00:27:26.295 Discovery Log Change Notices: Not Supported 00:27:26.295 Controller Attributes 00:27:26.295 128-bit Host Identifier: Supported 00:27:26.296 Non-Operational Permissive Mode: Not Supported 00:27:26.296 NVM Sets: Not Supported 00:27:26.296 Read Recovery Levels: Not Supported 00:27:26.296 Endurance Groups: Not Supported 00:27:26.296 Predictable Latency Mode: Not Supported 00:27:26.296 Traffic Based Keep ALive: Supported 00:27:26.296 Namespace Granularity: Not Supported 
00:27:26.296 SQ Associations: Not Supported 00:27:26.296 UUID List: Not Supported 00:27:26.296 Multi-Domain Subsystem: Not Supported 00:27:26.296 Fixed Capacity Management: Not Supported 00:27:26.296 Variable Capacity Management: Not Supported 00:27:26.296 Delete Endurance Group: Not Supported 00:27:26.296 Delete NVM Set: Not Supported 00:27:26.296 Extended LBA Formats Supported: Not Supported 00:27:26.296 Flexible Data Placement Supported: Not Supported 00:27:26.296 00:27:26.296 Controller Memory Buffer Support 00:27:26.296 ================================ 00:27:26.296 Supported: No 00:27:26.296 00:27:26.296 Persistent Memory Region Support 00:27:26.296 ================================ 00:27:26.296 Supported: No 00:27:26.296 00:27:26.296 Admin Command Set Attributes 00:27:26.296 ============================ 00:27:26.296 Security Send/Receive: Not Supported 00:27:26.296 Format NVM: Not Supported 00:27:26.296 Firmware Activate/Download: Not Supported 00:27:26.296 Namespace Management: Not Supported 00:27:26.296 Device Self-Test: Not Supported 00:27:26.296 Directives: Not Supported 00:27:26.296 NVMe-MI: Not Supported 00:27:26.296 Virtualization Management: Not Supported 00:27:26.296 Doorbell Buffer Config: Not Supported 00:27:26.296 Get LBA Status Capability: Not Supported 00:27:26.296 Command & Feature Lockdown Capability: Not Supported 00:27:26.296 Abort Command Limit: 4 00:27:26.296 Async Event Request Limit: 4 00:27:26.296 Number of Firmware Slots: N/A 00:27:26.296 Firmware Slot 1 Read-Only: N/A 00:27:26.296 Firmware Activation Without Reset: N/A 00:27:26.296 Multiple Update Detection Support: N/A 00:27:26.296 Firmware Update Granularity: No Information Provided 00:27:26.296 Per-Namespace SMART Log: Yes 00:27:26.296 Asymmetric Namespace Access Log Page: Supported 00:27:26.296 ANA Transition Time : 10 sec 00:27:26.296 00:27:26.296 Asymmetric Namespace Access Capabilities 00:27:26.296 ANA Optimized State : Supported 00:27:26.296 ANA Non-Optimized State : Supported 00:27:26.296 ANA Inaccessible State : Supported 00:27:26.296 ANA Persistent Loss State : Supported 00:27:26.296 ANA Change State : Supported 00:27:26.296 ANAGRPID is not changed : No 00:27:26.296 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:26.296 00:27:26.296 ANA Group Identifier Maximum : 128 00:27:26.296 Number of ANA Group Identifiers : 128 00:27:26.296 Max Number of Allowed Namespaces : 1024 00:27:26.296 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:26.296 Command Effects Log Page: Supported 00:27:26.296 Get Log Page Extended Data: Supported 00:27:26.296 Telemetry Log Pages: Not Supported 00:27:26.296 Persistent Event Log Pages: Not Supported 00:27:26.296 Supported Log Pages Log Page: May Support 00:27:26.296 Commands Supported & Effects Log Page: Not Supported 00:27:26.296 Feature Identifiers & Effects Log Page:May Support 00:27:26.296 NVMe-MI Commands & Effects Log Page: May Support 00:27:26.296 Data Area 4 for Telemetry Log: Not Supported 00:27:26.296 Error Log Page Entries Supported: 128 00:27:26.296 Keep Alive: Supported 00:27:26.296 Keep Alive Granularity: 1000 ms 00:27:26.296 00:27:26.296 NVM Command Set Attributes 00:27:26.296 ========================== 00:27:26.296 Submission Queue Entry Size 00:27:26.296 Max: 64 00:27:26.296 Min: 64 00:27:26.296 Completion Queue Entry Size 00:27:26.296 Max: 16 00:27:26.296 Min: 16 00:27:26.296 Number of Namespaces: 1024 00:27:26.296 Compare Command: Not Supported 00:27:26.296 Write Uncorrectable Command: Not Supported 00:27:26.296 Dataset Management Command: Supported 
00:27:26.296 Write Zeroes Command: Supported 00:27:26.296 Set Features Save Field: Not Supported 00:27:26.296 Reservations: Not Supported 00:27:26.296 Timestamp: Not Supported 00:27:26.296 Copy: Not Supported 00:27:26.296 Volatile Write Cache: Present 00:27:26.296 Atomic Write Unit (Normal): 1 00:27:26.296 Atomic Write Unit (PFail): 1 00:27:26.296 Atomic Compare & Write Unit: 1 00:27:26.296 Fused Compare & Write: Not Supported 00:27:26.296 Scatter-Gather List 00:27:26.296 SGL Command Set: Supported 00:27:26.296 SGL Keyed: Not Supported 00:27:26.296 SGL Bit Bucket Descriptor: Not Supported 00:27:26.296 SGL Metadata Pointer: Not Supported 00:27:26.296 Oversized SGL: Not Supported 00:27:26.296 SGL Metadata Address: Not Supported 00:27:26.296 SGL Offset: Supported 00:27:26.296 Transport SGL Data Block: Not Supported 00:27:26.296 Replay Protected Memory Block: Not Supported 00:27:26.296 00:27:26.296 Firmware Slot Information 00:27:26.296 ========================= 00:27:26.296 Active slot: 0 00:27:26.296 00:27:26.296 Asymmetric Namespace Access 00:27:26.296 =========================== 00:27:26.296 Change Count : 0 00:27:26.296 Number of ANA Group Descriptors : 1 00:27:26.296 ANA Group Descriptor : 0 00:27:26.296 ANA Group ID : 1 00:27:26.296 Number of NSID Values : 1 00:27:26.296 Change Count : 0 00:27:26.296 ANA State : 1 00:27:26.296 Namespace Identifier : 1 00:27:26.296 00:27:26.296 Commands Supported and Effects 00:27:26.296 ============================== 00:27:26.296 Admin Commands 00:27:26.296 -------------- 00:27:26.296 Get Log Page (02h): Supported 00:27:26.296 Identify (06h): Supported 00:27:26.296 Abort (08h): Supported 00:27:26.296 Set Features (09h): Supported 00:27:26.296 Get Features (0Ah): Supported 00:27:26.296 Asynchronous Event Request (0Ch): Supported 00:27:26.296 Keep Alive (18h): Supported 00:27:26.296 I/O Commands 00:27:26.296 ------------ 00:27:26.296 Flush (00h): Supported 00:27:26.296 Write (01h): Supported LBA-Change 00:27:26.296 Read (02h): Supported 00:27:26.296 Write Zeroes (08h): Supported LBA-Change 00:27:26.296 Dataset Management (09h): Supported 00:27:26.296 00:27:26.296 Error Log 00:27:26.296 ========= 00:27:26.296 Entry: 0 00:27:26.296 Error Count: 0x3 00:27:26.296 Submission Queue Id: 0x0 00:27:26.296 Command Id: 0x5 00:27:26.296 Phase Bit: 0 00:27:26.296 Status Code: 0x2 00:27:26.296 Status Code Type: 0x0 00:27:26.296 Do Not Retry: 1 00:27:26.296 Error Location: 0x28 00:27:26.296 LBA: 0x0 00:27:26.296 Namespace: 0x0 00:27:26.296 Vendor Log Page: 0x0 00:27:26.296 ----------- 00:27:26.296 Entry: 1 00:27:26.296 Error Count: 0x2 00:27:26.296 Submission Queue Id: 0x0 00:27:26.296 Command Id: 0x5 00:27:26.296 Phase Bit: 0 00:27:26.296 Status Code: 0x2 00:27:26.296 Status Code Type: 0x0 00:27:26.296 Do Not Retry: 1 00:27:26.296 Error Location: 0x28 00:27:26.296 LBA: 0x0 00:27:26.296 Namespace: 0x0 00:27:26.296 Vendor Log Page: 0x0 00:27:26.296 ----------- 00:27:26.296 Entry: 2 00:27:26.296 Error Count: 0x1 00:27:26.296 Submission Queue Id: 0x0 00:27:26.296 Command Id: 0x4 00:27:26.296 Phase Bit: 0 00:27:26.296 Status Code: 0x2 00:27:26.296 Status Code Type: 0x0 00:27:26.296 Do Not Retry: 1 00:27:26.296 Error Location: 0x28 00:27:26.296 LBA: 0x0 00:27:26.296 Namespace: 0x0 00:27:26.296 Vendor Log Page: 0x0 00:27:26.296 00:27:26.296 Number of Queues 00:27:26.296 ================ 00:27:26.296 Number of I/O Submission Queues: 128 00:27:26.296 Number of I/O Completion Queues: 128 00:27:26.296 00:27:26.296 ZNS Specific Controller Data 00:27:26.296 
============================ 00:27:26.296 Zone Append Size Limit: 0 00:27:26.296 00:27:26.296 00:27:26.296 Active Namespaces 00:27:26.296 ================= 00:27:26.296 get_feature(0x05) failed 00:27:26.296 Namespace ID:1 00:27:26.296 Command Set Identifier: NVM (00h) 00:27:26.296 Deallocate: Supported 00:27:26.296 Deallocated/Unwritten Error: Not Supported 00:27:26.296 Deallocated Read Value: Unknown 00:27:26.296 Deallocate in Write Zeroes: Not Supported 00:27:26.296 Deallocated Guard Field: 0xFFFF 00:27:26.296 Flush: Supported 00:27:26.296 Reservation: Not Supported 00:27:26.296 Namespace Sharing Capabilities: Multiple Controllers 00:27:26.296 Size (in LBAs): 3750748848 (1788GiB) 00:27:26.296 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:26.296 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:26.296 UUID: 3c34c559-dc62-4924-b856-d7a52de30f88 00:27:26.296 Thin Provisioning: Not Supported 00:27:26.296 Per-NS Atomic Units: Yes 00:27:26.296 Atomic Write Unit (Normal): 8 00:27:26.296 Atomic Write Unit (PFail): 8 00:27:26.296 Preferred Write Granularity: 8 00:27:26.296 Atomic Compare & Write Unit: 8 00:27:26.296 Atomic Boundary Size (Normal): 0 00:27:26.296 Atomic Boundary Size (PFail): 0 00:27:26.296 Atomic Boundary Offset: 0 00:27:26.296 NGUID/EUI64 Never Reused: No 00:27:26.296 ANA group ID: 1 00:27:26.296 Namespace Write Protected: No 00:27:26.296 Number of LBA Formats: 1 00:27:26.296 Current LBA Format: LBA Format #00 00:27:26.296 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:26.296 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:26.297 rmmod nvme_tcp 00:27:26.297 rmmod nvme_fabrics 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.297 13:12:48 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.840 13:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:28.840 13:12:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:28.840 13:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:28.840 13:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:28.840 13:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:28.840 13:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:28.840 13:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:28.840 13:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:28.840 13:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:28.840 13:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:28.840 13:12:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:33.071 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:33.071 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:33.071 00:27:33.071 real 0m19.842s 00:27:33.071 user 0m5.473s 00:27:33.071 sys 0m11.439s 00:27:33.071 13:12:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:33.071 13:12:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:33.071 ************************************ 00:27:33.071 END TEST nvmf_identify_kernel_target 00:27:33.071 ************************************ 00:27:33.071 13:12:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:33.071 13:12:54 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:33.071 13:12:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:33.071 13:12:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.071 13:12:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:33.071 ************************************ 
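The matching teardown is traced in clean_kernel_target just above: the namespace is disabled, the port-to-subsystem link is removed, the configfs directories are removed in reverse order, and the nvmet modules are unloaded. Only the target of the "echo 0" redirect is hidden by xtrace; it is presumably the namespace enable attribute:

echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable   # inferred target
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet   # only succeeds once nothing holds the modules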
00:27:33.071 START TEST nvmf_auth_host 00:27:33.071 ************************************ 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:33.071 * Looking for test storage... 00:27:33.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:33.071 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.072 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:33.072 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:33.072 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:33.072 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.072 13:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.072 13:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.072 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:33.072 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:33.072 13:12:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:33.072 13:12:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.211 
13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:41.211 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:41.211 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:41.211 Found net devices under 0000:31:00.0: 
cvl_0_0 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.211 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:41.212 Found net devices under 0000:31:00.1: cvl_0_1 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:41.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.838 ms 00:27:41.212 00:27:41.212 --- 10.0.0.2 ping statistics --- 00:27:41.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.212 rtt min/avg/max/mdev = 0.838/0.838/0.838/0.000 ms 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:41.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:27:41.212 00:27:41.212 --- 10.0.0.1 ping statistics --- 00:27:41.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.212 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=851044 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 851044 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 851044 ']' 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
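Stepping back, nvmf_tcp_init above builds the test topology out of the two E810 ports discovered earlier: cvl_0_0 is moved into a private network namespace and carries the target address 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, with an iptables rule opening TCP/4420 and a ping in each direction as a sanity check. Condensed from the trace (interface names and addresses as logged):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace

This is also why the target application is launched with the "ip netns exec cvl_0_0_ns_spdk" prefix a few lines further down.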
00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:41.212 13:13:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5111fbbc042d14e479872f89e0008b9f 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.EVi 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5111fbbc042d14e479872f89e0008b9f 0 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5111fbbc042d14e479872f89e0008b9f 0 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5111fbbc042d14e479872f89e0008b9f 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.EVi 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.EVi 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.EVi 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:41.783 
13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=779506716d638e73c879d1ec9987c5e39e31a9f23f23ca8468ba1c819811b570 00:27:41.783 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.OAr 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 779506716d638e73c879d1ec9987c5e39e31a9f23f23ca8468ba1c819811b570 3 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 779506716d638e73c879d1ec9987c5e39e31a9f23f23ca8468ba1c819811b570 3 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=779506716d638e73c879d1ec9987c5e39e31a9f23f23ca8468ba1c819811b570 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.OAr 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.OAr 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.OAr 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7c6c7579cd0e9030d36c208be0b95258b658f819b2d1f4a0 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.14O 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7c6c7579cd0e9030d36c208be0b95258b658f819b2d1f4a0 0 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7c6c7579cd0e9030d36c208be0b95258b658f819b2d1f4a0 0 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7c6c7579cd0e9030d36c208be0b95258b658f819b2d1f4a0 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.14O 00:27:42.045 13:13:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.14O 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.14O 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=90fc5c68617a73cce57b039fb3d24b4c9f1703c1f687b711 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.xKk 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 90fc5c68617a73cce57b039fb3d24b4c9f1703c1f687b711 2 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 90fc5c68617a73cce57b039fb3d24b4c9f1703c1f687b711 2 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=90fc5c68617a73cce57b039fb3d24b4c9f1703c1f687b711 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.xKk 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.xKk 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.xKk 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c99ef57f78a2fa8266a387387310ea1d 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FWf 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c99ef57f78a2fa8266a387387310ea1d 1 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c99ef57f78a2fa8266a387387310ea1d 1 
00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c99ef57f78a2fa8266a387387310ea1d 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FWf 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FWf 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.FWf 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f72bd8710e084f1ca0e87116ea544089 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8wo 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f72bd8710e084f1ca0e87116ea544089 1 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f72bd8710e084f1ca0e87116ea544089 1 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f72bd8710e084f1ca0e87116ea544089 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:42.045 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8wo 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8wo 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.8wo 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=9b7c77009842d7ad973fccebebb9f65817506b696b436b90 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Sch 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9b7c77009842d7ad973fccebebb9f65817506b696b436b90 2 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9b7c77009842d7ad973fccebebb9f65817506b696b436b90 2 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9b7c77009842d7ad973fccebebb9f65817506b696b436b90 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Sch 00:27:42.305 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Sch 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Sch 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bbe06b7cedb1d9efa619eff590803a86 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ozW 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bbe06b7cedb1d9efa619eff590803a86 0 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bbe06b7cedb1d9efa619eff590803a86 0 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bbe06b7cedb1d9efa619eff590803a86 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:42.306 13:13:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ozW 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ozW 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ozW 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=719f300244ed84b0e70233ca9fdb9e17dcaa7c4d0066705854d03cae178bef30 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.LHh 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 719f300244ed84b0e70233ca9fdb9e17dcaa7c4d0066705854d03cae178bef30 3 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 719f300244ed84b0e70233ca9fdb9e17dcaa7c4d0066705854d03cae178bef30 3 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=719f300244ed84b0e70233ca9fdb9e17dcaa7c4d0066705854d03cae178bef30 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.LHh 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.LHh 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.LHh 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 851044 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 851044 ']' 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
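
The gen_dhchap_key calls traced above all follow one pattern: pull len/2 random bytes with xxd -p from /dev/urandom, pick a temp file with mktemp -t spdk.key-<digest>.XXX, wrap the hex string into a DHHC-1 secret (the inline python step under format_dhchap_key/format_key), then chmod the file to 0600 and echo its path back to the caller. A stand-alone sketch of that flow for the sha256/32 case follows; it is not the actual helper from nvmf/common.sh, and it assumes the usual DH-HMAC-CHAP secret layout of base64(secret || CRC-32 of the secret), with the ASCII hex string itself used as the secret, which is what the DHHC-1 values later in this log suggest.

  # hedged sketch mirroring "gen_dhchap_key sha256 32" from the trace above
  hexkey=$(xxd -p -c0 -l 16 /dev/urandom)        # 32 hex characters of random key material
  keyfile=$(mktemp -t spdk.key-sha256.XXX)
  # assumption: payload is the hex string plus a little-endian CRC-32, base64-encoded;
  # digest id 01 in the DHHC-1:<id>: prefix selects sha256
  secret=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:01:" + base64.b64encode(k + c).decode() + ":")' "$hexkey")
  printf '%s\n' "$secret" > "$keyfile"
  chmod 0600 "$keyfile"
  echo "$keyfile"
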
00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:42.306 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.EVi 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.OAr ]] 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OAr 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.14O 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.xKk ]] 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xKk 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.FWf 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.8wo ]] 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8wo 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
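
The rpc_cmd keyring_file_add_key calls above register each generated secret file with the running target under the names key0..key2 and ckey0..ckey2 (the key3/key4 and ckey3 registrations continue in the trace below). rpc_cmd is the test harness's wrapper for issuing scripts/rpc.py commands against the RPC socket passed to waitforlisten, so issued by hand the same registrations would look roughly like this, with the per-run /tmp suffixes taken from this trace:

  # hedged direct equivalent of the keyring_file_add_key loop traced above
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.EVi
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OAr
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.14O
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xKk
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key2  /tmp/spdk.key-sha256.FWf
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8wo
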
00:27:42.566 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Sch 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ozW ]] 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ozW 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.LHh 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
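
The configure_kernel_target trace that follows (modprobe nvmet, the mkdir/echo/ln -s sequence, and the nvme discover check) builds a kernel soft target for nqn.2024-02.io.spdk:cnode0 over configfs and exposes /dev/nvme0n1 through it on 10.0.0.1:4420. Condensed, the sequence is roughly the sketch below; xtrace does not capture the echo redirect targets, so the attribute names here are the standard nvmet configfs ones and an assumption about where each write lands.

  # hedged condensation of the configure_kernel_target steps traced below
  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
  echo 1            > "$subsys/attr_allow_any_host"          # assumed target of "echo 1"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"     # assumed target of "echo /dev/nvme0n1"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover output further down confirms the result: a discovery entry plus the nqn.2024-02.io.spdk:cnode0 subsystem, both on tcp/ipv4, 10.0.0.1, port 4420.
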
00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:42.567 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:42.827 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:42.827 13:13:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:47.029 Waiting for block devices as requested 00:27:47.029 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:47.029 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:47.029 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:47.029 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:47.029 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:47.029 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:47.029 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:47.029 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:47.029 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:47.029 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:47.290 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:47.290 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:47.290 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:47.551 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:47.551 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:47.551 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:47.551 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:48.493 13:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:48.493 13:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:48.493 13:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:48.493 13:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:48.493 13:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:48.493 13:13:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:48.493 13:13:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:48.493 13:13:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:48.493 13:13:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:48.493 No valid GPT data, bailing 00:27:48.493 13:13:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:48.493 13:13:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:48.493 13:13:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:48.493 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:48.493 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:48.493 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:48.493 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:48.494 00:27:48.494 Discovery Log Number of Records 2, Generation counter 2 00:27:48.494 =====Discovery Log Entry 0====== 00:27:48.494 trtype: tcp 00:27:48.494 adrfam: ipv4 00:27:48.494 subtype: current discovery subsystem 00:27:48.494 treq: not specified, sq flow control disable supported 00:27:48.494 portid: 1 00:27:48.494 trsvcid: 4420 00:27:48.494 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:48.494 traddr: 10.0.0.1 00:27:48.494 eflags: none 00:27:48.494 sectype: none 00:27:48.494 =====Discovery Log Entry 1====== 00:27:48.494 trtype: tcp 00:27:48.494 adrfam: ipv4 00:27:48.494 subtype: nvme subsystem 00:27:48.494 treq: not specified, sq flow control disable supported 00:27:48.494 portid: 1 00:27:48.494 trsvcid: 4420 00:27:48.494 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:48.494 traddr: 10.0.0.1 00:27:48.494 eflags: none 00:27:48.494 sectype: none 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 
]] 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.494 nvme0n1 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.494 13:13:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.494 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.756 
13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.756 nvme0n1 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.756 13:13:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.756 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.018 nvme0n1 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
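
Each connect_authenticate pass in this trace boils down to two host-side RPC calls: bdev_nvme_set_options to restrict the allowed DH-HMAC-CHAP digests and DH groups, then bdev_nvme_attach_controller with --dhchap-key (and, when a controller key exists, --dhchap-ctrlr-key) naming the keyring entries registered earlier. Issued directly for the sha256/ffdhe2048/key1 pass traced above, that is roughly:

  # hedged direct equivalent of the "connect_authenticate sha256 ffdhe2048 1" pass
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

The pass is then verified with bdev_nvme_get_controllers (expecting nvme0) and torn down with bdev_nvme_detach_controller nvme0 before the next digest/dhgroup/key combination, as the surrounding trace shows.
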
00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.018 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.279 nvme0n1 00:27:49.279 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.279 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.279 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.279 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.279 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.279 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.280 13:13:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:27:49.280 13:13:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.280 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.585 nvme0n1 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.585 nvme0n1 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.585 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.847 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.848 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.848 nvme0n1 00:27:49.848 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.848 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.848 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.848 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.848 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.848 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.109 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.110 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.110 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.110 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.110 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.110 nvme0n1 00:27:50.110 
13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.110 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.110 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.110 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.110 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.110 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.110 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.110 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.110 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.110 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.370 13:13:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.370 nvme0n1 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.370 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
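Before each of these attaches, nvmet_auth_set_key (host/auth.sh@42-51) programs the matching secret on the target side: it picks the HMAC name, the FFDHE group, the DHHC-1 host key for the slot and, when the slot defines one, the controller key. The trace records only the echo commands, not where their output is redirected; on a configfs-managed kernel nvmet target these values would normally be written to the host entry's dhchap_* attributes, so the following sketch rests on that assumption and the path is illustrative:

    # assumption: the echoes seen in the trace land in the nvmet host's configfs attributes
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest, kernel crypto notation as in the trace
    echo "$dhgroup"     > "$host/dhchap_dhgroup"   # e.g. ffdhe3072 for the entries above
    echo "$key"         > "$host/dhchap_key"       # the DHHC-1:xx:... secret chosen for this keyid
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # only when the slot has a controller key
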
00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.632 nvme0n1 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.632 
13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.632 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.894 13:13:12 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.894 nvme0n1 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:50.894 13:13:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.894 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.155 nvme0n1 00:27:51.155 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.155 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.155 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.155 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.155 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.415 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.415 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.415 13:13:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.415 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.415 13:13:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.415 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.415 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:51.415 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.416 13:13:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.416 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.676 nvme0n1 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.676 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.677 13:13:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.677 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.937 nvme0n1 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
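From here the same program/attach/verify/detach cycle simply repeats; the iteration starting at this point is sha256 with ffdhe4096 and key slot 3. Reconstructed from the loop markers in the trace (host/auth.sh@101-104), the driver is roughly the nested loop below; the digest stays sha256 throughout the entries shown here, and the array contents are inferred from the values that appear in this part of the log:

    # reconstruction from the trace; dhgroups and keys are defined earlier in the script
    for dhgroup in "${dhgroups[@]}"; do            # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 appear in this section
        for keyid in "${!keys[@]}"; do             # slots 0-4; slot 4 carries no controller key
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"      # target side: write the slot's secret
            connect_authenticate sha256 "$dhgroup" "$keyid"    # host side: attach, check, detach
        done
    done
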
00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.937 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.198 nvme0n1 00:27:52.198 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.198 13:13:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.198 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.198 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.198 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.198 13:13:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.198 13:13:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.198 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.458 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.459 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.459 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.459 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.459 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.719 nvme0n1 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:27:52.719 13:13:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.719 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.291 nvme0n1 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.291 
13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.291 13:13:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.291 13:13:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.552 nvme0n1 00:27:53.552 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.552 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.552 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.552 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.552 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.552 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.813 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.074 nvme0n1 00:27:54.074 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.074 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.074 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.074 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.074 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.074 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.335 
13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.335 13:13:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.336 13:13:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.336 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.336 13:13:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.596 nvme0n1 00:27:54.596 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.596 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.596 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.596 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.596 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.596 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.856 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.856 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.856 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.856 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.856 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.856 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.856 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:54.856 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.856 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.856 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.856 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.856 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.857 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.117 nvme0n1 00:27:55.117 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.117 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.117 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.117 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.117 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.117 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.378 13:13:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.378 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.378 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.378 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.378 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.378 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.378 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.378 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.378 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.378 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.378 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.378 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.378 13:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.378 13:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.949 nvme0n1 00:27:55.949 13:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.949 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.949 13:13:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.949 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.949 13:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.949 13:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.210 13:13:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.840 nvme0n1 00:27:56.840 13:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.840 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.840 13:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.840 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.840 13:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.840 13:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.840 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.840 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.840 13:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.840 13:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.840 13:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.840 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.841 13:13:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.787 nvme0n1 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.787 
13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
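The get_main_ns_ip steps traced here (nvmf/common.sh@741..755) only show already-expanded values, so the following is a hedged reconstruction of what the helper appears to do: map the transport to the name of an environment variable, then dereference that name with bash indirection. Anything not literally visible in the trace, in particular the TEST_TRANSPORT variable name and the error handling, is an assumption.

  # Sketch reconstructed from the nvmf/common.sh@741..755 entries above; assumptions noted.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP              # common.sh@744
      ip_candidates["tcp"]=NVMF_INITIATOR_IP                  # common.sh@745

      # TEST_TRANSPORT holding "tcp" is assumed; the trace only shows the expanded value.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # @747
      ip=${ip_candidates[$TEST_TRANSPORT]}                     # @748 -> NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                              # @750, ${!ip} -> 10.0.0.1
      echo "${!ip}"                                            # @755
  }

  # With the values from this run it simply prints the initiator address:
  TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1
  get_main_ns_ip    # -> 10.0.0.1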
00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.787 13:13:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.359 nvme0n1 00:27:58.359 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:58.620 
13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.620 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.193 nvme0n1 00:27:59.193 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.193 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.193 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.193 13:13:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.193 13:13:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.193 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.454 nvme0n1 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.454 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
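Bash xtrace never prints redirection targets, so the nvmet_auth_set_key calls in this trace only show bare echo lines. The sketch below is one plausible reading of what the helper writes on the target side; the configfs host-attribute paths (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and the host directory name are assumptions based on the kernel nvmet auth interface, not something this log shows. The echoed values themselves match the trace.

  # keys[]/ckeys[] are the test's DHHC-1 secret arrays; keyid 1 (used just above) shown as an example.
  declare -a keys ckeys
  keys[1]='DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==:'
  ckeys[1]='DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==:'

  # Sketch only: echoes mirror host/auth.sh@42..51 above, redirection targets are assumed.
  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey
      digest="$1" dhgroup="$2" keyid="$3"
      key=${keys[keyid]} ckey=${ckeys[keyid]}

      # Assumed nvmet configfs location for the host entry created earlier in the test.
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac($digest)" > "$host/dhchap_hash"              # e.g. hmac(sha384)
      echo "$dhgroup" > "$host/dhchap_dhgroup"                # e.g. ffdhe2048
      echo "$key" > "$host/dhchap_key"                        # host secret
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # only for bidirectional auth
  }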
00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.716 nvme0n1 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.716 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.717 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.717 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.717 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.717 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.717 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.717 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.717 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.717 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.717 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.717 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.978 nvme0n1 00:27:59.978 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.978 13:13:21 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.978 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.979 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.240 nvme0n1 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.240 13:13:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.502 nvme0n1 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
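As background on the secrets being echoed above (this comes from the NVMe in-band authentication secret representation used by nvme-cli, not from the log itself): a DHHC-1 string has the form DHHC-1:<t>:<base64 of the secret plus a 4-byte CRC-32>:, where <t> identifies an associated hash (00 = none, 01/02/03 = SHA-256/384/512). The lengths in this run are consistent with that: the 00-tagged secrets decode to 32+4 bytes, the 02-tagged one to 48+4, the 03-tagged one to 64+4. A quick check on the keyid-0 host secret used just above:

  # Decode the base64 payload of the keyid-0 host secret and count its bytes.
  key='DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg:'
  b64=${key#DHHC-1:*:}                  # strip the "DHHC-1:00:" prefix
  b64=${b64%:}                          # strip the trailing ':'
  echo -n "$b64" | base64 -d | wc -c    # prints 36: a 32-byte secret + 4-byte CRC-32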
00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.502 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.764 nvme0n1 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
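The keyid-0 cycle traced just above (set_options, attach, get_controllers, detach) maps onto four SPDK JSON-RPC calls; rpc_cmd is the test harness's wrapper for scripts/rpc.py. Written out directly it looks roughly like the following sketch, where the RPC socket path is the SPDK default (an assumption here) and key0/ckey0 are key names the test registered before this excerpt:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # default socket path, assumed

  # 1) Allow exactly one digest/dhgroup pair on the initiator (host/auth.sh@60).
  rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # 2) Connect with DH-HMAC-CHAP: host key plus controller key for bidirectional auth (@61).
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3) The step passes only if the controller shows up under its expected name (@64).
  [[ $(rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # 4) Detach so the next digest/dhgroup/keyid combination starts clean (@65).
  rpc bdev_nvme_detach_controller nvme0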
00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.764 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.765 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.765 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.765 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.765 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.765 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.765 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.026 nvme0n1 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:28:01.026 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.027 nvme0n1 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.027 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.288 13:13:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.289 nvme0n1 00:28:01.289 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.289 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.289 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.289 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.289 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.555 nvme0n1 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.555 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.556 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.556 13:13:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.556 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:01.817 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.818 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.078 nvme0n1 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.078 13:13:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.079 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.079 13:13:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.339 nvme0n1 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.339 13:13:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.339 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.601 nvme0n1 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:02.601 13:13:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.601 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.862 nvme0n1 00:28:02.862 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.862 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.863 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.863 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.863 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:03.124 13:13:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.386 nvme0n1 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.386 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.959 nvme0n1 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.959 13:13:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.221 nvme0n1 00:28:04.221 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.221 13:13:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.221 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.221 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.221 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.221 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:04.481 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.482 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.482 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.482 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.482 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.482 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.482 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.482 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.482 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.482 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:04.482 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.482 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.742 nvme0n1 00:28:04.742 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.742 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.742 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.742 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.742 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.742 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.003 13:13:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.263 nvme0n1 00:28:05.263 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.263 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.263 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.263 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.263 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.263 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
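The get_main_ns_ip expansions traced above (the ip_candidates map, the -z checks, and the final echo of 10.0.0.1) can be read as roughly the following bash helper. This is a hedged reconstruction from the trace, not the literal nvmf/common.sh source: the transport variable name and the indirect ${!ip} expansion are assumptions inferred from the "[[ -z tcp ]]" and "[[ -z 10.0.0.1 ]]" steps.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    # Map transport type to the name of the environment variable holding the IP.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP                    # this run: tcp -> 10.0.0.1

    [[ -z $TEST_TRANSPORT ]] && return 1                      # trace: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1    # trace: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                               # trace: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                             # trace: echo 10.0.0.1
}

Its output feeds straight into the -a argument of the bdev_nvme_attach_controller calls that follow.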
00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.523 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.784 nvme0n1 00:28:05.784 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.784 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.784 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.784 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.784 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.784 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.784 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.784 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.784 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.784 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
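Each of these blocks is one iteration of the same digest x dhgroup x keyid sweep. Flattening the nvmet_auth_set_key / connect_authenticate pair into a single loop body gives roughly the sketch below; it is assembled from the commands visible in this trace, with rpc_cmd, nvmet_auth_set_key, get_main_ns_ip and the digests/dhgroups/keys/ckeys arrays defined earlier in auth.sh and the test harness (not shown in this excerpt).

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Target side: install the key (and controller key, if any) for this combination.
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

      # Host side: restrict the nvme bdev layer to the same digest and DH group.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Add --dhchap-ctrlr-key only when a controller key exists for this keyid;
      # ckeys[4] is empty in this run, so keyid 4 authenticates one-way.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"

      # Authentication succeeded only if the controller shows up; then detach
      # so the next digest/dhgroup/keyid combination starts clean.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done

The bare nvme0n1 lines interleaved in the log appear to be the bdev name printed by each successful bdev_nvme_attach_controller call.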
00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.045 13:13:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.617 nvme0n1 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.617 13:13:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.560 nvme0n1 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.560 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.502 nvme0n1 00:28:08.502 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.502 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.502 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.502 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.502 13:13:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.502 13:13:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.502 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.073 nvme0n1 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.073 13:13:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.073 13:13:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.014 nvme0n1 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.014 nvme0n1 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.014 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.015 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.275 13:13:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.275 13:13:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.275 nvme0n1 00:28:10.275 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.275 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.275 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.275 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.275 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.275 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.275 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.275 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.536 nvme0n1 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.536 13:13:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.536 13:13:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.536 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.795 nvme0n1 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.795 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.796 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.055 nvme0n1 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.055 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.315 nvme0n1 00:28:11.315 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.315 
13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.315 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.315 13:13:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.315 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.315 13:13:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.315 13:13:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.315 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.575 nvme0n1 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
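Note on the nvmet_auth_set_key echoes surrounding this point (host/auth.sh@42-51 in the trace): the helper records the digest, DH group, key and optional controller key for the target side. The xtrace shows the echo commands but not their destinations, so the redirect targets in this sketch are assumptions, not the script's literal source.
# Sketch reconstructed from the xtrace; destinations of the echoes are assumed, not shown in the trace.
nvmet_auth_set_key() {
    local digest dhgroup keyid key ckey
    digest=$1 dhgroup=$2 keyid=$3
    key=${keys[keyid]} ckey=${ckeys[keyid]}
    echo "hmac(${digest})"           # trace @48; assumed target: the nvmet host's dhchap hash attribute
    echo "$dhgroup"                  # trace @49; assumed target: dhchap dhgroup attribute
    echo "$key"                      # trace @50; assumed target: dhchap key attribute
    [[ -z $ckey ]] || echo "$ckey"   # trace @51; controller key only set when one exists for this keyid
}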
00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.575 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.835 nvme0n1 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.835 13:13:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
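Note on the get_main_ns_ip trace continuing around this point (nvmf/common.sh@741-755): the helper maps the transport to the name of the environment variable holding the initiator address and then dereferences it, which is why the trace ends with "echo 10.0.0.1". A hedged reconstruction follows; TEST_TRANSPORT is an assumed variable name standing in for the literal "tcp" that xtrace prints.
# Sketch of the helper as it appears in the trace; not the script's literal source.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    # Map the transport to the *name* of the variable holding the address.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # Indirect expansion turns that name into the address itself (10.0.0.1 in this run).
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}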
00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.835 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.095 nvme0n1 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.095 
13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.095 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.355 nvme0n1 00:28:12.355 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.355 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.355 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.355 13:13:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.355 13:13:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.355 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.356 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.356 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.356 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.356 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.356 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.356 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.616 nvme0n1 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.616 13:13:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.616 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.877 nvme0n1 00:28:12.877 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.877 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.877 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.877 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.877 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.877 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.877 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.877 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.877 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.877 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
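Note on the host-side passes above (host/auth.sh@55-65): each connect_authenticate call restricts bdev_nvme to the digest/DH group under test, attaches with the matching DH-HMAC-CHAP key, confirms a controller named nvme0 appears, and detaches. The RPC names below are exactly the ones visible in the trace; the function body itself is a sketch, assuming rpc_cmd forwards to SPDK's RPC interface as elsewhere in the test suite.
# Sketch of the flow traced repeatedly in this section: digest dhgroup keyid.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # The pass only counts if a controller actually shows up under the expected name.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}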
00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.138 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.398 nvme0n1 00:28:13.398 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.398 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:13.398 13:13:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.398 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.398 13:13:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.398 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.658 nvme0n1 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.658 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.659 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.920 nvme0n1 00:28:13.920 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.920 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.920 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.920 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.920 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.920 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.920 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.920 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.920 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:13.920 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
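Note on the ckey handling seen at host/auth.sh@58 just above: keyid 4 has no controller key in this run (the trace shows "ckey=" and "[[ -z '' ]]"), so bidirectional authentication is skipped for it; the :+ parameter expansion is what makes the extra RPC arguments disappear cleanly. A standalone illustration, with fake key strings, is below.
# Illustrative only (fake values): how the :+ expansion drops the --dhchap-ctrlr-key pair
# entirely when a key index has no controller key.
ckeys=([1]="fake-ctrl-key-1" [4]="")
for keyid in 1 4; do
    extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#extra[@]} extra arg(s): ${extra[*]}"
done
# prints: keyid=1 -> 2 extra arg(s): --dhchap-ctrlr-key ckey1
#         keyid=4 -> 0 extra arg(s):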
00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.180 13:13:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.441 nvme0n1 00:28:14.441 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.441 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.441 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.441 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.441 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.441 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.441 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.441 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.441 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.441 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
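Note on the overall structure: the block running from the ffdhe2048 passes down through the ffdhe6144 passes starting here is one nested sweep driven by host/auth.sh@101-104, with every configured DH group exercised against every key index. A compressed reconstruction of that driver loop follows; the dhgroups contents are limited to the groups that appear in this section, the full script may list more.
# Reconstruction of the driver loop seen at host/auth.sh@101-104 in the trace.
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this section only
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                   # keyids 0..4 in this run
        nvmet_auth_set_key   "sha512" "$dhgroup" "$keyid"   # target side
        connect_authenticate "sha512" "$dhgroup" "$keyid"   # host side: attach, verify, detach
    done
done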
00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.703 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.964 nvme0n1 00:28:14.964 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.964 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.964 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.964 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.964 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.964 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.964 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.964 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.964 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.964 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.225 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.226 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.226 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.226 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.226 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.226 13:13:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.226 13:13:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:15.226 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.226 13:13:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.486 nvme0n1 00:28:15.486 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.486 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.486 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.486 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.486 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.486 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.747 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.009 nvme0n1 00:28:16.009 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.009 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.009 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.009 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.009 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.009 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.270 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.270 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.270 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.270 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.270 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.270 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.270 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:16.270 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.270 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.270 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:16.270 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.271 13:13:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.531 nvme0n1 00:28:16.531 13:13:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.531 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.531 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.531 13:13:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.531 13:13:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.531 13:13:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTExMWZiYmMwNDJkMTRlNDc5ODcyZjg5ZTAwMDhiOWY4CyJg: 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: ]] 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Nzc5NTA2NzE2ZDYzOGU3M2M4NzlkMWVjOTk4N2M1ZTM5ZTMxYTlmMjNmMjNjYTg0NjhiYTFjODE5ODExYjU3MKGrRNc=: 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.792 13:13:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.793 13:13:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.793 13:13:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.793 13:13:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:16.793 13:13:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.793 13:13:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.365 nvme0n1 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.365 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.307 nvme0n1 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.307 13:13:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Yzk5ZWY1N2Y3OGEyZmE4MjY2YTM4NzM4NzMxMGVhMWRqjuwz: 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: ]] 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjcyYmQ4NzEwZTA4NGYxY2EwZTg3MTE2ZWE1NDQwODkqL2dc: 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.307 13:13:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.252 nvme0n1 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWI3Yzc3MDA5ODQyZDdhZDk3M2ZjY2ViZWJiOWY2NTgxNzUwNmI2OTZiNDM2Yjkw6Gl5cQ==: 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: ]] 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmJlMDZiN2NlZGIxZDllZmE2MTllZmY1OTA4MDNhODZraF0F: 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:19.252 13:13:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.252 13:13:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.824 nvme0n1 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzE5ZjMwMDI0NGVkODRiMGU3MDIzM2NhOWZkYjllMTdkY2FhN2M0ZDAwNjY3MDU4NTRkMDNjYWUxNzhiZWYzMDU17nw=: 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:19.824 13:13:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.769 nvme0n1 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2M2Yzc1NzljZDBlOTAzMGQzNmMyMDhiZTBiOTUyNThiNjU4ZjgxOWIyZDFmNGEwakelOA==: 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTBmYzVjNjg2MTdhNzNjY2U1N2IwMzlmYjNkMjRiNGM5ZjE3MDNjMWY2ODdiNzExBt9nfQ==: 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.769 
13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.769 request: 00:28:20.769 { 00:28:20.769 "name": "nvme0", 00:28:20.769 "trtype": "tcp", 00:28:20.769 "traddr": "10.0.0.1", 00:28:20.769 "adrfam": "ipv4", 00:28:20.769 "trsvcid": "4420", 00:28:20.769 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:20.769 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:20.769 "prchk_reftag": false, 00:28:20.769 "prchk_guard": false, 00:28:20.769 "hdgst": false, 00:28:20.769 "ddgst": false, 00:28:20.769 "method": "bdev_nvme_attach_controller", 00:28:20.769 "req_id": 1 00:28:20.769 } 00:28:20.769 Got JSON-RPC error response 00:28:20.769 response: 00:28:20.769 { 00:28:20.769 "code": -5, 00:28:20.769 "message": "Input/output error" 00:28:20.769 } 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:20.769 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.770 request: 00:28:20.770 { 00:28:20.770 "name": "nvme0", 00:28:20.770 "trtype": "tcp", 00:28:20.770 "traddr": "10.0.0.1", 00:28:20.770 "adrfam": "ipv4", 00:28:20.770 "trsvcid": "4420", 00:28:20.770 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:20.770 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:20.770 "prchk_reftag": false, 00:28:20.770 "prchk_guard": false, 00:28:20.770 "hdgst": false, 00:28:20.770 "ddgst": false, 00:28:20.770 "dhchap_key": "key2", 00:28:20.770 "method": "bdev_nvme_attach_controller", 00:28:20.770 "req_id": 1 00:28:20.770 } 00:28:20.770 Got JSON-RPC error response 00:28:20.770 response: 00:28:20.770 { 00:28:20.770 "code": -5, 00:28:20.770 "message": "Input/output error" 00:28:20.770 } 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:20.770 13:13:42 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.770 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.031 request: 00:28:21.031 { 00:28:21.031 "name": "nvme0", 00:28:21.031 "trtype": "tcp", 00:28:21.031 "traddr": "10.0.0.1", 00:28:21.031 "adrfam": "ipv4", 
00:28:21.031 "trsvcid": "4420", 00:28:21.031 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:21.031 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:21.031 "prchk_reftag": false, 00:28:21.031 "prchk_guard": false, 00:28:21.031 "hdgst": false, 00:28:21.031 "ddgst": false, 00:28:21.031 "dhchap_key": "key1", 00:28:21.031 "dhchap_ctrlr_key": "ckey2", 00:28:21.031 "method": "bdev_nvme_attach_controller", 00:28:21.031 "req_id": 1 00:28:21.031 } 00:28:21.031 Got JSON-RPC error response 00:28:21.031 response: 00:28:21.031 { 00:28:21.031 "code": -5, 00:28:21.031 "message": "Input/output error" 00:28:21.031 } 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:21.031 rmmod nvme_tcp 00:28:21.031 rmmod nvme_fabrics 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 851044 ']' 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 851044 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 851044 ']' 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 851044 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 851044 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 851044' 00:28:21.031 killing process with pid 851044 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 851044 00:28:21.031 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 851044 00:28:21.292 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:28:21.292 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:21.292 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:21.292 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:21.292 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:21.292 13:13:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.292 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:21.292 13:13:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.207 13:13:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:23.207 13:13:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:23.207 13:13:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:23.207 13:13:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:23.207 13:13:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:23.207 13:13:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:23.207 13:13:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:23.207 13:13:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:23.207 13:13:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:23.207 13:13:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:23.207 13:13:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:23.207 13:13:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:23.207 13:13:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:27.515 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:27.515 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:27.515 13:13:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.EVi /tmp/spdk.key-null.14O /tmp/spdk.key-sha256.FWf /tmp/spdk.key-sha384.Sch /tmp/spdk.key-sha512.LHh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:27.515 13:13:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:31.726 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:31.726 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:31.726 00:28:31.726 real 0m58.698s 00:28:31.726 user 0m51.214s 00:28:31.726 sys 0m16.303s 00:28:31.726 13:13:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:31.726 13:13:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.726 ************************************ 00:28:31.726 END TEST nvmf_auth_host 00:28:31.726 ************************************ 00:28:31.726 13:13:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:31.726 13:13:53 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:31.726 13:13:53 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:31.726 13:13:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:31.726 13:13:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:31.726 13:13:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:31.726 ************************************ 00:28:31.726 START TEST nvmf_digest 00:28:31.726 ************************************ 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:31.726 * Looking for test storage... 
00:28:31.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.726 13:13:53 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:31.727 13:13:53 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:31.727 13:13:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:39.874 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:39.874 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:39.874 Found net devices under 0000:31:00.0: cvl_0_0 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:39.874 Found net devices under 0000:31:00.1: cvl_0_1 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.874 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:39.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:28:39.875 00:28:39.875 --- 10.0.0.2 ping statistics --- 00:28:39.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.875 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:39.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:28:39.875 00:28:39.875 --- 10.0.0.1 ping statistics --- 00:28:39.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.875 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:39.875 13:14:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:40.136 ************************************ 00:28:40.136 START TEST nvmf_digest_clean 00:28:40.136 ************************************ 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=868362 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 868362 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 868362 ']' 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.136 
13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:40.136 13:14:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:40.136 [2024-07-15 13:14:01.772718] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:28:40.136 [2024-07-15 13:14:01.772776] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.136 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.136 [2024-07-15 13:14:01.854101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.136 [2024-07-15 13:14:01.927631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.136 [2024-07-15 13:14:01.927670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.136 [2024-07-15 13:14:01.927678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:40.136 [2024-07-15 13:14:01.927685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:40.136 [2024-07-15 13:14:01.927691] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:40.136 [2024-07-15 13:14:01.927710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.082 null0 00:28:41.082 [2024-07-15 13:14:02.654358] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.082 [2024-07-15 13:14:02.678530] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=868706 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 868706 /var/tmp/bperf.sock 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 868706 ']' 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:41.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:41.082 13:14:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.083 [2024-07-15 13:14:02.734744] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:28:41.083 [2024-07-15 13:14:02.734791] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid868706 ] 00:28:41.083 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.083 [2024-07-15 13:14:02.816209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.083 [2024-07-15 13:14:02.881302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.024 13:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:42.024 13:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:42.024 13:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:42.024 13:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:42.024 13:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:42.024 13:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.024 13:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.286 nvme0n1 00:28:42.286 13:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:42.286 13:14:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:42.286 Running I/O for 2 seconds... 
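The commands above are the core of each run_bperf invocation in host/digest.sh. Condensed into plain shell (with the long workspace path shortened to $SPDK for readability), the first run looks roughly like this:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF=/var/tmp/bperf.sock

  # bdevperf starts idle because of --wait-for-rpc and is driven entirely
  # over its private RPC socket
  $SPDK/build/examples/bdevperf -m 2 -r $BPERF \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # finish start-up, then attach the TCP target with data digest enabled;
  # this creates the nvme0n1 bdev the workload runs against
  $SPDK/scripts/rpc.py -s $BPERF framework_start_init
  $SPDK/scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # start the timed two-second workload ("Running I/O for 2 seconds...")
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests

The later runs differ only in the -w/-o/-q arguments (randread or randwrite, 4096 or 131072 bytes, queue depth 128 or 16).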
00:28:44.202 00:28:44.202 Latency(us) 00:28:44.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.202 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:44.202 nvme0n1 : 2.00 20186.85 78.85 0.00 0.00 6331.55 3140.27 19333.12 00:28:44.202 =================================================================================================================== 00:28:44.202 Total : 20186.85 78.85 0.00 0.00 6331.55 3140.27 19333.12 00:28:44.202 0 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:44.463 | select(.opcode=="crc32c") 00:28:44.463 | "\(.module_name) \(.executed)"' 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 868706 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 868706 ']' 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 868706 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 868706 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 868706' 00:28:44.463 killing process with pid 868706 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 868706 00:28:44.463 Received shutdown signal, test time was about 2.000000 seconds 00:28:44.463 00:28:44.463 Latency(us) 00:28:44.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.463 =================================================================================================================== 00:28:44.463 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:44.463 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 868706 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:44.724 13:14:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=869395 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 869395 /var/tmp/bperf.sock 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 869395 ']' 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:44.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:44.724 13:14:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:44.724 [2024-07-15 13:14:06.412098] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:28:44.724 [2024-07-15 13:14:06.412156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869395 ] 00:28:44.724 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:44.724 Zero copy mechanism will not be used. 
00:28:44.724 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.724 [2024-07-15 13:14:06.493201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.724 [2024-07-15 13:14:06.546482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.667 13:14:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:45.667 13:14:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:45.667 13:14:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:45.667 13:14:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:45.667 13:14:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:45.667 13:14:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.667 13:14:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.928 nvme0n1 00:28:45.928 13:14:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:45.928 13:14:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:45.928 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:45.928 Zero copy mechanism will not be used. 00:28:45.928 Running I/O for 2 seconds... 
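Each run is judged by the accel statistics read back after the two-second workload, as shown after the first latency table above: the crc32c opcode must have executed at least once, and with scan_dsa=false it must have been handled by the software module. A compact sketch of that check, using the same jq filter host/digest.sh uses:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # read accel statistics from bdevperf and keep only the crc32c entry,
  # printed as "<module_name> <executed>"
  read -r acc_module acc_executed < <(
      $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

  exp_module=software                  # DSA is not used in these runs (scan_dsa=false)
  (( acc_executed > 0 ))               # digests were actually computed
  [[ $acc_module == "$exp_module" ]]   # and they were computed in software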
00:28:48.475 00:28:48.475 Latency(us) 00:28:48.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.475 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:48.475 nvme0n1 : 2.00 3005.42 375.68 0.00 0.00 5320.30 1303.89 13871.79 00:28:48.475 =================================================================================================================== 00:28:48.475 Total : 3005.42 375.68 0.00 0.00 5320.30 1303.89 13871.79 00:28:48.475 0 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:48.475 | select(.opcode=="crc32c") 00:28:48.475 | "\(.module_name) \(.executed)"' 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 869395 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 869395 ']' 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 869395 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 869395 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 869395' 00:28:48.475 killing process with pid 869395 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 869395 00:28:48.475 Received shutdown signal, test time was about 2.000000 seconds 00:28:48.475 00:28:48.475 Latency(us) 00:28:48.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.475 =================================================================================================================== 00:28:48.475 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:48.475 13:14:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 869395 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:48.475 13:14:10 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=870072 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 870072 /var/tmp/bperf.sock 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 870072 ']' 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:48.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:48.475 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:48.475 [2024-07-15 13:14:10.110330] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
00:28:48.476 [2024-07-15 13:14:10.110384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870072 ] 00:28:48.476 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.476 [2024-07-15 13:14:10.193045] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.476 [2024-07-15 13:14:10.246604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.419 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:49.419 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:49.419 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:49.419 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:49.419 13:14:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:49.419 13:14:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:49.419 13:14:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:49.679 nvme0n1 00:28:49.679 13:14:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:49.679 13:14:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:49.679 Running I/O for 2 seconds... 
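As a quick cross-check on the result tables, the MiB/s column is simply IOPS multiplied by the I/O size; for the two randread runs above:

  # MiB/s = IOPS * IO size in bytes / 2^20
  awk 'BEGIN { printf "%.2f\n", 20186.85 * 4096   / 1048576 }'   # 78.85  (4 KiB,   queue depth 128)
  awk 'BEGIN { printf "%.2f\n",  3005.42 * 131072 / 1048576 }'   # 375.68 (128 KiB, queue depth 16)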
00:28:52.221 00:28:52.221 Latency(us) 00:28:52.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.221 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.221 nvme0n1 : 2.00 22128.23 86.44 0.00 0.00 5776.72 2239.15 11141.12 00:28:52.221 =================================================================================================================== 00:28:52.221 Total : 22128.23 86.44 0.00 0.00 5776.72 2239.15 11141.12 00:28:52.221 0 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:52.221 | select(.opcode=="crc32c") 00:28:52.221 | "\(.module_name) \(.executed)"' 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 870072 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 870072 ']' 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 870072 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 870072 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 870072' 00:28:52.221 killing process with pid 870072 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 870072 00:28:52.221 Received shutdown signal, test time was about 2.000000 seconds 00:28:52.221 00:28:52.221 Latency(us) 00:28:52.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.221 =================================================================================================================== 00:28:52.221 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 870072 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:52.221 13:14:13 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=870762 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 870762 /var/tmp/bperf.sock 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 870762 ']' 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:52.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:52.221 13:14:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:52.221 [2024-07-15 13:14:13.858287] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:28:52.221 [2024-07-15 13:14:13.858357] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid870762 ] 00:28:52.221 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:52.221 Zero copy mechanism will not be used. 
00:28:52.221 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.221 [2024-07-15 13:14:13.947172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.221 [2024-07-15 13:14:14.000359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.793 13:14:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:52.793 13:14:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:52.793 13:14:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:53.065 13:14:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:53.065 13:14:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:53.065 13:14:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:53.065 13:14:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:53.325 nvme0n1 00:28:53.325 13:14:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:53.325 13:14:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:53.586 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:53.586 Zero copy mechanism will not be used. 00:28:53.586 Running I/O for 2 seconds... 
00:28:55.496 00:28:55.496 Latency(us) 00:28:55.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.496 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:55.496 nvme0n1 : 2.00 3514.40 439.30 0.00 0.00 4545.75 1788.59 14090.24 00:28:55.496 =================================================================================================================== 00:28:55.496 Total : 3514.40 439.30 0.00 0.00 4545.75 1788.59 14090.24 00:28:55.496 0 00:28:55.496 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:55.496 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:55.496 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:55.496 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:55.497 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:55.497 | select(.opcode=="crc32c") 00:28:55.497 | "\(.module_name) \(.executed)"' 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 870762 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 870762 ']' 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 870762 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 870762 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 870762' 00:28:55.757 killing process with pid 870762 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 870762 00:28:55.757 Received shutdown signal, test time was about 2.000000 seconds 00:28:55.757 00:28:55.757 Latency(us) 00:28:55.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.757 =================================================================================================================== 00:28:55.757 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 870762 00:28:55.757 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 868362 00:28:55.758 13:14:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 868362 ']' 00:28:55.758 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 868362 00:28:55.758 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:55.758 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:55.758 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 868362 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 868362' 00:28:56.019 killing process with pid 868362 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 868362 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 868362 00:28:56.019 00:28:56.019 real 0m16.026s 00:28:56.019 user 0m31.470s 00:28:56.019 sys 0m3.307s 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:56.019 ************************************ 00:28:56.019 END TEST nvmf_digest_clean 00:28:56.019 ************************************ 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:56.019 ************************************ 00:28:56.019 START TEST nvmf_digest_error 00:28:56.019 ************************************ 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=871486 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 871486 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 871486 ']' 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
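Before the teardown above, the clean pass asserts that the CRC32C digest work was executed by the expected accel module: it pulls accel statistics over the same bperf socket and filters them down to the crc32c opcode with the jq expression shown in the trace. Roughly, as a condensed sketch rather than the literal script:

  # which accel module executed crc32c, and how many times?
  read -r acc_module acc_executed < <(
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # the clean pass expects a non-zero count from the plain software module
  (( acc_executed > 0 )) && [[ $acc_module == software ]]

The nvmf_digest_error test now starting reuses the same plumbing but deliberately breaks the digest path, as sketched below after the target start-up.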
00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:56.019 13:14:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.280 [2024-07-15 13:14:17.870713] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:28:56.280 [2024-07-15 13:14:17.870768] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.280 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.280 [2024-07-15 13:14:17.948325] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.280 [2024-07-15 13:14:18.021787] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.280 [2024-07-15 13:14:18.021828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.280 [2024-07-15 13:14:18.021835] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.280 [2024-07-15 13:14:18.021841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.280 [2024-07-15 13:14:18.021847] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
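The error variant flips the same machinery into a negative test. Because nvmf_tgt was started with --wait-for-rpc, the test can re-route the crc32c operation to the accel "error" module before initialization finishes, and then ask that module to corrupt digests; the initiator, attached with --ddgst, reports the mismatches, which is what the long run of "data digest error on tqpair" / TRANSIENT TRANSPORT ERROR completions below is exercising. Condensed (RPC names and option values copied from the trace, which uses the target's default /var/tmp/spdk.sock socket; the transport and listener setup in between is omitted):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # before framework init completes, send crc32c through the error-injection module
  $RPC accel_assign_opc -o crc32c -m error

  # ... framework init, TCP transport and 10.0.0.2:4420 listener setup as in the trace ...

  # start corrupting crc32c results (flag values exactly as used by the test)
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256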
00:28:56.280 [2024-07-15 13:14:18.021872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.853 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:56.853 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:56.853 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:56.853 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:56.853 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:56.853 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.853 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:56.853 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.853 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.115 [2024-07-15 13:14:18.679782] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.115 null0 00:28:57.115 [2024-07-15 13:14:18.760628] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.115 [2024-07-15 13:14:18.784807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=871818 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 871818 /var/tmp/bperf.sock 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 871818 ']' 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:57.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:57.115 13:14:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.115 [2024-07-15 13:14:18.840438] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:28:57.115 [2024-07-15 13:14:18.840485] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871818 ] 00:28:57.115 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.115 [2024-07-15 13:14:18.918441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.376 [2024-07-15 13:14:18.973025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.949 13:14:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.949 13:14:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:57.949 13:14:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:57.949 13:14:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:57.949 13:14:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:57.949 13:14:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.949 13:14:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.949 13:14:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.949 13:14:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.949 13:14:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.520 nvme0n1 00:28:58.521 13:14:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:58.521 13:14:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.521 13:14:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.521 13:14:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.521 13:14:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:58.521 13:14:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:58.521 Running I/O for 2 seconds... 00:28:58.521 [2024-07-15 13:14:20.282408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.521 [2024-07-15 13:14:20.282439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.521 [2024-07-15 13:14:20.282448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.521 [2024-07-15 13:14:20.294882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.521 [2024-07-15 13:14:20.294901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.521 [2024-07-15 13:14:20.294908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.521 [2024-07-15 13:14:20.307968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.521 [2024-07-15 13:14:20.307986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.521 [2024-07-15 13:14:20.307993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.521 [2024-07-15 13:14:20.319847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.521 [2024-07-15 13:14:20.319864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.521 [2024-07-15 13:14:20.319871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.521 [2024-07-15 13:14:20.331713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.521 [2024-07-15 13:14:20.331730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.521 [2024-07-15 13:14:20.331736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.521 [2024-07-15 13:14:20.344360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.521 [2024-07-15 13:14:20.344378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.521 [2024-07-15 13:14:20.344384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.782 [2024-07-15 13:14:20.356369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.782 [2024-07-15 13:14:20.356387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8479 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.782 [2024-07-15 13:14:20.356399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.782 [2024-07-15 13:14:20.368026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.782 [2024-07-15 13:14:20.368044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.368050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.380108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.380126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.380132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.392844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.392861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.392867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.404734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.404751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.404758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.418577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.418594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.418601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.429885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.429902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.429908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.442273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.442290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:126 nsid:1 lba:56 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.442296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.454147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.454165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.454171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.467117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.467137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.467144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.479069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.479086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.479092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.491436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.491453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.491459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.505579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.505595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.505601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.515063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.515080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.515086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.528374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.528391] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.528397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.541336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.541353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.541359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.555551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.555568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.555574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.566151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.566168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.566174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.577006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.577023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.577029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.590007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.590024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.590031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.783 [2024-07-15 13:14:20.602749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:58.783 [2024-07-15 13:14:20.602766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.783 [2024-07-15 13:14:20.602773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.045 [2024-07-15 13:14:20.616366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f8cc70) 00:28:59.045 [2024-07-15 13:14:20.616383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.045 [2024-07-15 13:14:20.616390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.045 [2024-07-15 13:14:20.627854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.045 [2024-07-15 13:14:20.627871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.045 [2024-07-15 13:14:20.627877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.045 [2024-07-15 13:14:20.639945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.045 [2024-07-15 13:14:20.639962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.045 [2024-07-15 13:14:20.639968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.045 [2024-07-15 13:14:20.652354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.045 [2024-07-15 13:14:20.652371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.045 [2024-07-15 13:14:20.652377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.045 [2024-07-15 13:14:20.665430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.045 [2024-07-15 13:14:20.665447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.045 [2024-07-15 13:14:20.665453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.045 [2024-07-15 13:14:20.678743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.045 [2024-07-15 13:14:20.678759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.045 [2024-07-15 13:14:20.678769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.045 [2024-07-15 13:14:20.690016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.045 [2024-07-15 13:14:20.690033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.045 [2024-07-15 13:14:20.690039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.045 [2024-07-15 13:14:20.702006] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.702023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.702029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.046 [2024-07-15 13:14:20.716161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.716178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.716185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.046 [2024-07-15 13:14:20.727467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.727483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.727489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.046 [2024-07-15 13:14:20.739093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.739110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.739116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.046 [2024-07-15 13:14:20.751701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.751717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.751723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.046 [2024-07-15 13:14:20.762482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.762499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.762505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.046 [2024-07-15 13:14:20.775659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.775675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.775681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:59.046 [2024-07-15 13:14:20.788386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.788405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.788411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.046 [2024-07-15 13:14:20.800446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.800462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.800468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.046 [2024-07-15 13:14:20.812972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.812988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.812994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.046 [2024-07-15 13:14:20.825631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.825648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.825654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.046 [2024-07-15 13:14:20.838042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.838059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.838065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.046 [2024-07-15 13:14:20.849415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.849432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.849438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.046 [2024-07-15 13:14:20.861873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.046 [2024-07-15 13:14:20.861891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.046 [2024-07-15 13:14:20.861897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:20.873927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:20.873944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:20.873950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:20.886881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:20.886897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:20.886903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:20.900001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:20.900018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:20.900024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:20.911393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:20.911409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:20.911415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:20.923387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:20.923403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:20.923410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:20.936260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:20.936276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:20.936282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:20.949174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:20.949191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:20.949197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:20.961303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:20.961320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:20.961326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:20.971339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:20.971355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:20.971361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:20.985871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:20.985889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:20.985895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:20.998741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:20.998763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:20.998770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:21.010536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:21.010552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:21.010558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:21.022641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:21.022659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:21.022665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:21.035023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:21.035040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:59.308 [2024-07-15 13:14:21.035046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:21.047435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:21.047452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:21.047458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:21.058677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:21.058694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:21.058700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:21.071912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:21.071930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:21.071936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:21.083359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:21.083377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:21.083382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:21.097117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:21.097134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:21.097140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:21.110029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:21.110045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:21.110051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.308 [2024-07-15 13:14:21.121580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.308 [2024-07-15 13:14:21.121597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:4762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.308 [2024-07-15 13:14:21.121603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.570 [2024-07-15 13:14:21.133227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.570 [2024-07-15 13:14:21.133246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.570 [2024-07-15 13:14:21.133253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.570 [2024-07-15 13:14:21.145932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.570 [2024-07-15 13:14:21.145949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.570 [2024-07-15 13:14:21.145955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.570 [2024-07-15 13:14:21.156880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.570 [2024-07-15 13:14:21.156897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.570 [2024-07-15 13:14:21.156903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.570 [2024-07-15 13:14:21.170291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.570 [2024-07-15 13:14:21.170307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.570 [2024-07-15 13:14:21.170314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.570 [2024-07-15 13:14:21.182898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.570 [2024-07-15 13:14:21.182915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.570 [2024-07-15 13:14:21.182921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.570 [2024-07-15 13:14:21.195400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.570 [2024-07-15 13:14:21.195416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.570 [2024-07-15 13:14:21.195422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.570 [2024-07-15 13:14:21.207158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.570 [2024-07-15 13:14:21.207175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.570 [2024-07-15 13:14:21.207185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.570 [2024-07-15 13:14:21.219567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.219584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.219590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.231689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.231705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.231712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.245531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.245547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.245553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.258073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.258090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.258096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.270012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.270028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.270034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.281078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.281094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.281100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.292616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 
[2024-07-15 13:14:21.292633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.292639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.306665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.306681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.306688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.319873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.319892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.319898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.329333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.329351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.329357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.342460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.342478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.342484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.355501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.355518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.355524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.368868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.368885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.368891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.381855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.381872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.381879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.571 [2024-07-15 13:14:21.394640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.571 [2024-07-15 13:14:21.394656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.571 [2024-07-15 13:14:21.394662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.848 [2024-07-15 13:14:21.405653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.848 [2024-07-15 13:14:21.405670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.848 [2024-07-15 13:14:21.405676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.848 [2024-07-15 13:14:21.419039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.848 [2024-07-15 13:14:21.419056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.848 [2024-07-15 13:14:21.419063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.848 [2024-07-15 13:14:21.430877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.848 [2024-07-15 13:14:21.430894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.848 [2024-07-15 13:14:21.430900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.848 [2024-07-15 13:14:21.443098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.848 [2024-07-15 13:14:21.443115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.848 [2024-07-15 13:14:21.443121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.848 [2024-07-15 13:14:21.455674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.848 [2024-07-15 13:14:21.455691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.848 [2024-07-15 13:14:21.455697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.848 [2024-07-15 13:14:21.467748] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.848 [2024-07-15 13:14:21.467765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.467771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.480373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.480390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.480396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.492911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.492928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.492934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.503661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.503678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.503684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.517284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.517301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.517308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.529951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.529969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.529978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.542560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.542577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.542583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:59.849 [2024-07-15 13:14:21.555243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.555261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.555267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.565956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.565973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.565979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.578878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.578895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.578901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.591311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.591328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.591334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.602991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.603008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.603015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.616793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.616810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.616816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.628898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.628915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.628921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.640160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.640177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.640183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.849 [2024-07-15 13:14:21.653285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:28:59.849 [2024-07-15 13:14:21.653301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.849 [2024-07-15 13:14:21.653308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.664802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.664819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.664825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.677829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.677845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.677852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.689025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.689042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.689048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.702505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.702521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.702527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.715577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.715594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.715600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.727914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.727931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.727937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.741138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.741154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.741164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.752433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.752449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.752455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.763535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.763553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.763559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.777499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.777516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.777522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.789637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.789654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.789660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.801107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.801125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:00.154 [2024-07-15 13:14:21.801131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.813354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.813371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.813377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.825463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.825479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.825485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.838948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.838965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.838971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.852195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.852215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.852222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.863442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.863458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.863464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.876099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.876115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.876122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.889164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.889181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:5433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.889187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.900320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.900337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.900343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.912159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.912176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.912182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.924126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.924143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.924149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.937619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.937636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.937642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.948386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.948403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.948409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.154 [2024-07-15 13:14:21.960886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.154 [2024-07-15 13:14:21.960903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.154 [2024-07-15 13:14:21.960909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:21.973520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:21.973537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:21.973543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:21.986683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:21.986700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:21.986706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:21.999213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:21.999233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:21.999240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.011414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.011431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.011437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.021896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.021913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.021919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.035539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.035556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.035562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.048683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.048700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.048706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.060719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 
00:29:00.440 [2024-07-15 13:14:22.060736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.060746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.072349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.072366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.072372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.084647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.084664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.084670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.097823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.097840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.097847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.109466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.109483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.109489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.123218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.123237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.123243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.134498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.134515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.134521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.147036] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.147052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.147059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.158935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.158952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.158958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.170956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.170974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.170980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.184452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.184468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.184474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.193993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.440 [2024-07-15 13:14:22.194009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.440 [2024-07-15 13:14:22.194016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.440 [2024-07-15 13:14:22.208054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.441 [2024-07-15 13:14:22.208071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.441 [2024-07-15 13:14:22.208077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.441 [2024-07-15 13:14:22.221243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.441 [2024-07-15 13:14:22.221260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.441 [2024-07-15 13:14:22.221266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:00.441 [2024-07-15 13:14:22.233964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.441 [2024-07-15 13:14:22.233981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.441 [2024-07-15 13:14:22.233987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.441 [2024-07-15 13:14:22.245657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.441 [2024-07-15 13:14:22.245673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.441 [2024-07-15 13:14:22.245679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.441 [2024-07-15 13:14:22.256776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.441 [2024-07-15 13:14:22.256793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.441 [2024-07-15 13:14:22.256799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.703 [2024-07-15 13:14:22.269485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f8cc70) 00:29:00.703 [2024-07-15 13:14:22.269502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.703 [2024-07-15 13:14:22.269511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.703 00:29:00.703 Latency(us) 00:29:00.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.703 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:00.703 nvme0n1 : 2.04 20230.99 79.03 0.00 0.00 6194.10 1911.47 46312.11 00:29:00.703 =================================================================================================================== 00:29:00.703 Total : 20230.99 79.03 0.00 0.00 6194.10 1911.47 46312.11 00:29:00.703 0 00:29:00.703 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:00.703 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:00.703 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:00.703 | .driver_specific 00:29:00.703 | .nvme_error 00:29:00.703 | .status_code 00:29:00.703 | .command_transient_transport_error' 00:29:00.703 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:00.703 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:29:00.703 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 871818 00:29:00.703 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@948 -- # '[' -z 871818 ']' 00:29:00.703 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 871818 00:29:00.703 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:00.703 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:00.703 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 871818 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 871818' 00:29:00.964 killing process with pid 871818 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 871818 00:29:00.964 Received shutdown signal, test time was about 2.000000 seconds 00:29:00.964 00:29:00.964 Latency(us) 00:29:00.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.964 =================================================================================================================== 00:29:00.964 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 871818 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=872510 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 872510 /var/tmp/bperf.sock 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 872510 ']' 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:00.964 13:14:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:00.964 [2024-07-15 13:14:22.713835] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
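For reference, the launch step traced just above condenses to the following shell sketch. It is a non-authoritative summary, not the harness code: the binary path, socket path, and flags are the ones recorded in this log, while SPDK_DIR, BPERF_SOCK, and the readiness loop are shorthand introduced here (waitforlisten in common/autotest_common.sh does the real polling).

# Start a dedicated bdevperf instance on its own RPC socket for the randread 131072/16 case.
# -m 2 pins it to core 1 (core mask 0x2), matching the reactor/latency lines in this log;
# -z keeps it idle until perform_tests is sent over the RPC socket.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# Approximation of waitforlisten: poll the socket until the app answers an RPC.
until "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" -t 1 rpc_get_methods &>/dev/null; do
    sleep 0.2
done

The bperf_rpc and bperf_py helpers used throughout host/digest.sh appear to be thin wrappers that pass -s /var/tmp/bperf.sock to rpc.py and bdevperf.py, which is why every RPC in this trace carries that socket path.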
00:29:00.964 [2024-07-15 13:14:22.713896] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872510 ] 00:29:00.964 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:00.964 Zero copy mechanism will not be used. 00:29:00.964 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.226 [2024-07-15 13:14:22.792893] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.226 [2024-07-15 13:14:22.846294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.797 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:01.797 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:01.797 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:01.797 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:02.056 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:02.056 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.056 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:02.056 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.056 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.056 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.316 nvme0n1 00:29:02.316 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:02.316 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.316 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:02.316 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.316 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:02.316 13:14:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:02.316 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:02.316 Zero copy mechanism will not be used. 00:29:02.316 Running I/O for 2 seconds... 
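Pieced together from the traces above, the per-case flow looks roughly like the sketch below. All RPC names, flags, addresses, and the jq filter are taken verbatim from this log; SPDK_DIR, BPERF_SOCK, RPC, and errs are shorthand introduced here, and the assumption that rpc_cmd (no -s override) goes to the default RPC socket of the target application, while bperf_rpc goes to bperf.sock, is an inference from the trace rather than something the log states outright.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock
RPC="$SPDK_DIR/scripts/rpc.py"
# initiator side (bperf.sock): keep per-controller NVMe error counters, retry failed I/O indefinitely
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# clear any leftover accel error injection (rpc_cmd, presumably the default RPC socket)
"$RPC" accel_error_inject_error -o crc32c -t disable
# attach over TCP with data digest enabled (--ddgst); this exposes bdev nvme0n1
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# corrupt crc32c results so data digests mismatch and reads complete with
# COMMAND TRANSIENT TRANSPORT ERROR, as in the dumps above (flags as traced)
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
# run the 2-second workload, then read the transient-error counter back out of iostat
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
errs=$("$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))   # the 4096/128 case above counted 162 such completions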
00:29:02.316 [2024-07-15 13:14:24.055390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.316 [2024-07-15 13:14:24.055424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.316 [2024-07-15 13:14:24.055437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.316 [2024-07-15 13:14:24.065153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.316 [2024-07-15 13:14:24.065175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.316 [2024-07-15 13:14:24.065182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.316 [2024-07-15 13:14:24.074925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.316 [2024-07-15 13:14:24.074946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.316 [2024-07-15 13:14:24.074953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.316 [2024-07-15 13:14:24.084450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.316 [2024-07-15 13:14:24.084470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.316 [2024-07-15 13:14:24.084477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.316 [2024-07-15 13:14:24.095180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.316 [2024-07-15 13:14:24.095198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.316 [2024-07-15 13:14:24.095204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.316 [2024-07-15 13:14:24.106048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.316 [2024-07-15 13:14:24.106066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.316 [2024-07-15 13:14:24.106073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.316 [2024-07-15 13:14:24.116040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.316 [2024-07-15 13:14:24.116057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.316 [2024-07-15 13:14:24.116063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.316 [2024-07-15 13:14:24.125631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.316 [2024-07-15 13:14:24.125648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.316 [2024-07-15 13:14:24.125654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.316 [2024-07-15 13:14:24.136203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.316 [2024-07-15 13:14:24.136221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.316 [2024-07-15 13:14:24.136227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.148396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.148415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.148421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.157982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.158000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.158007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.169295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.169313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.169320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.178988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.179006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.179012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.189450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.189468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.189475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.199926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.199943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.199950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.209887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.209905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.209912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.219891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.219910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.219916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.229139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.229157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.229167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.238363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.238380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.238387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.247782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.247800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.247806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.258073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.258091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:02.578 [2024-07-15 13:14:24.258097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.268388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.268405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.268412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.278784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.278801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.278807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.289303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.289320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.289326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.578 [2024-07-15 13:14:24.300536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.578 [2024-07-15 13:14:24.300553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.578 [2024-07-15 13:14:24.300560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.579 [2024-07-15 13:14:24.308881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.579 [2024-07-15 13:14:24.308900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.579 [2024-07-15 13:14:24.308906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.579 [2024-07-15 13:14:24.319504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.579 [2024-07-15 13:14:24.319526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.579 [2024-07-15 13:14:24.319532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.579 [2024-07-15 13:14:24.328334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.579 [2024-07-15 13:14:24.328352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.579 [2024-07-15 13:14:24.328359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.579 [2024-07-15 13:14:24.337838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.579 [2024-07-15 13:14:24.337856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.579 [2024-07-15 13:14:24.337862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.579 [2024-07-15 13:14:24.347214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.579 [2024-07-15 13:14:24.347237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.579 [2024-07-15 13:14:24.347243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.579 [2024-07-15 13:14:24.358368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.579 [2024-07-15 13:14:24.358386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.579 [2024-07-15 13:14:24.358392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.579 [2024-07-15 13:14:24.368695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.579 [2024-07-15 13:14:24.368713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.579 [2024-07-15 13:14:24.368719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.579 [2024-07-15 13:14:24.379890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.579 [2024-07-15 13:14:24.379908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.579 [2024-07-15 13:14:24.379915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.579 [2024-07-15 13:14:24.388554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.579 [2024-07-15 13:14:24.388573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.579 [2024-07-15 13:14:24.388579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.579 [2024-07-15 13:14:24.398178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.579 [2024-07-15 13:14:24.398197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.579 [2024-07-15 13:14:24.398203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.840 [2024-07-15 13:14:24.410316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.840 [2024-07-15 13:14:24.410334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.840 [2024-07-15 13:14:24.410340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.840 [2024-07-15 13:14:24.420275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.840 [2024-07-15 13:14:24.420293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.840 [2024-07-15 13:14:24.420300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.840 [2024-07-15 13:14:24.432030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.840 [2024-07-15 13:14:24.432049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.840 [2024-07-15 13:14:24.432055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.840 [2024-07-15 13:14:24.442357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.840 [2024-07-15 13:14:24.442377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.840 [2024-07-15 13:14:24.442383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.840 [2024-07-15 13:14:24.450866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.840 [2024-07-15 13:14:24.450885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.840 [2024-07-15 13:14:24.450891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.840 [2024-07-15 13:14:24.461127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.840 [2024-07-15 13:14:24.461146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.840 [2024-07-15 13:14:24.461152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.840 [2024-07-15 13:14:24.471962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.840 
[2024-07-15 13:14:24.471981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.840 [2024-07-15 13:14:24.471987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.840 [2024-07-15 13:14:24.481829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.840 [2024-07-15 13:14:24.481847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.840 [2024-07-15 13:14:24.481854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.840 [2024-07-15 13:14:24.491516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.840 [2024-07-15 13:14:24.491536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.840 [2024-07-15 13:14:24.491546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.840 [2024-07-15 13:14:24.501411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.840 [2024-07-15 13:14:24.501430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.501436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.512111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.512130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.512136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.521563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.521581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.521588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.531942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.531960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.531967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.543262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.543280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.543286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.552893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.552912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.552919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.562774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.562793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.562799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.573637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.573655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.573661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.582813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.582834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.582841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.592519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.592538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.592544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.604656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.604674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.604681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.614981] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.614999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.615006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.625741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.625760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.625766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.637475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.637493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.637499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.647470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.647488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.647495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.841 [2024-07-15 13:14:24.658748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:02.841 [2024-07-15 13:14:24.658765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.841 [2024-07-15 13:14:24.658771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.103 [2024-07-15 13:14:24.668592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.103 [2024-07-15 13:14:24.668611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.103 [2024-07-15 13:14:24.668617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.103 [2024-07-15 13:14:24.678225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.103 [2024-07-15 13:14:24.678247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.103 [2024-07-15 13:14:24.678254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:29:03.103 [2024-07-15 13:14:24.689549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.103 [2024-07-15 13:14:24.689567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.103 [2024-07-15 13:14:24.689573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.103 [2024-07-15 13:14:24.699403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.103 [2024-07-15 13:14:24.699422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.103 [2024-07-15 13:14:24.699428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.103 [2024-07-15 13:14:24.709824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.103 [2024-07-15 13:14:24.709843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.103 [2024-07-15 13:14:24.709849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.103 [2024-07-15 13:14:24.721132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.103 [2024-07-15 13:14:24.721151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.103 [2024-07-15 13:14:24.721157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.103 [2024-07-15 13:14:24.731361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.103 [2024-07-15 13:14:24.731380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.103 [2024-07-15 13:14:24.731386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.103 [2024-07-15 13:14:24.740372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.103 [2024-07-15 13:14:24.740390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.103 [2024-07-15 13:14:24.740396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.103 [2024-07-15 13:14:24.750452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.103 [2024-07-15 13:14:24.750470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.103 [2024-07-15 13:14:24.750477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.103 [2024-07-15 13:14:24.761557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.103 [2024-07-15 13:14:24.761575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.103 [2024-07-15 13:14:24.761585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.103 [2024-07-15 13:14:24.771766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.103 [2024-07-15 13:14:24.771785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.103 [2024-07-15 13:14:24.771791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.103 [2024-07-15 13:14:24.783094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.103 [2024-07-15 13:14:24.783113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.103 [2024-07-15 13:14:24.783120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.791686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.791703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 [2024-07-15 13:14:24.791710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.801338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.801357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 [2024-07-15 13:14:24.801363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.810537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.810556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 [2024-07-15 13:14:24.810562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.821481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.821500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 [2024-07-15 13:14:24.821506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.830182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.830201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 [2024-07-15 13:14:24.830207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.839941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.839958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 [2024-07-15 13:14:24.839965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.851418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.851437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 [2024-07-15 13:14:24.851443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.862365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.862384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 [2024-07-15 13:14:24.862390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.872140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.872160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 [2024-07-15 13:14:24.872166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.883040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.883059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 [2024-07-15 13:14:24.883065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.892708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.892727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 
[2024-07-15 13:14:24.892734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.902095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.902114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 [2024-07-15 13:14:24.902120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.912439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.912458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 [2024-07-15 13:14:24.912464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.104 [2024-07-15 13:14:24.921961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.104 [2024-07-15 13:14:24.921979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.104 [2024-07-15 13:14:24.921986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.365 [2024-07-15 13:14:24.931459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.365 [2024-07-15 13:14:24.931478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.365 [2024-07-15 13:14:24.931488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.365 [2024-07-15 13:14:24.942889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.365 [2024-07-15 13:14:24.942908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.365 [2024-07-15 13:14:24.942914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.365 [2024-07-15 13:14:24.952854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.365 [2024-07-15 13:14:24.952872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:24.952878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:24.963395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:24.963413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:24.963420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:24.972432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:24.972450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:24.972456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:24.983800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:24.983819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:24.983825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:24.995178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:24.995197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:24.995203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.003016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.003035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.003041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.014274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.014292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.014298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.023149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.023171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.023177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.032689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.032708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.032714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.042989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.043008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.043014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.052237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.052256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.052262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.061393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.061411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.061417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.072396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.072415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.072421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.082152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.082171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.082177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.091925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.091944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.091950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.103195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.103214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.103220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.112758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.112777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.112783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.122393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.122411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.122417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.131208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.131227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.131238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.140697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.140716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.140722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.151125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.151144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.151150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.160684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.160702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.160708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.171924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 
[2024-07-15 13:14:25.171942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.171948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.366 [2024-07-15 13:14:25.183106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.366 [2024-07-15 13:14:25.183125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.366 [2024-07-15 13:14:25.183132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.191954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.191973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.191986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.202009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.202027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.202034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.212032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.212052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.212058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.225245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.225265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.225271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.237980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.237999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.238005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.248691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.248710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.248717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.261914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.261933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.261939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.274922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.274940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.274948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.288179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.288198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.288204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.298861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.298883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.298889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.307689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.307707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.307714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.317703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.317722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.317728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.328157] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.328176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.328182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.337858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.337877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.337883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.346335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.346354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.346360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.355068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.355087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.355094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.365071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.365090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.365096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.378325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.378343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.378350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.391356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.391374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.391381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:29:03.628 [2024-07-15 13:14:25.404260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.404279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.404285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.417675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.417694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.417700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.431029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.431048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.431054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.628 [2024-07-15 13:14:25.443763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.628 [2024-07-15 13:14:25.443782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.628 [2024-07-15 13:14:25.443788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.456861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.456880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.456887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.467509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.467526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.467533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.477830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.477849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.477855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.487898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.487916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.487926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.498188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.498206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.498212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.508355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.508372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.508378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.518833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.518851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.518858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.527604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.527623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.527629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.535237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.535255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.535261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.546325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.546343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.546350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.556710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.556727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.556733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.566826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.566845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.566851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.577223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.577246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.577253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.591014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.891 [2024-07-15 13:14:25.591033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.891 [2024-07-15 13:14:25.591040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.891 [2024-07-15 13:14:25.604635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.892 [2024-07-15 13:14:25.604654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.892 [2024-07-15 13:14:25.604660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.892 [2024-07-15 13:14:25.617428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.892 [2024-07-15 13:14:25.617447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.892 [2024-07-15 13:14:25.617453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.892 [2024-07-15 13:14:25.627689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.892 [2024-07-15 13:14:25.627708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.892 [2024-07-15 13:14:25.627715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.892 [2024-07-15 13:14:25.636982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.892 [2024-07-15 13:14:25.637000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.892 [2024-07-15 13:14:25.637006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.892 [2024-07-15 13:14:25.646543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.892 [2024-07-15 13:14:25.646561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.892 [2024-07-15 13:14:25.646567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.892 [2024-07-15 13:14:25.656828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.892 [2024-07-15 13:14:25.656846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.892 [2024-07-15 13:14:25.656852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.892 [2024-07-15 13:14:25.667941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.892 [2024-07-15 13:14:25.667960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.892 [2024-07-15 13:14:25.667969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.892 [2024-07-15 13:14:25.677689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.892 [2024-07-15 13:14:25.677708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.892 [2024-07-15 13:14:25.677714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.892 [2024-07-15 13:14:25.688798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.892 [2024-07-15 13:14:25.688816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.892 [2024-07-15 13:14:25.688822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.892 [2024-07-15 13:14:25.699064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.892 [2024-07-15 13:14:25.699083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.892 
[2024-07-15 13:14:25.699089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.892 [2024-07-15 13:14:25.709606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:03.892 [2024-07-15 13:14:25.709623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.892 [2024-07-15 13:14:25.709629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.720499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.720518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.720524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.731607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.731626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.731632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.741513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.741531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.741538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.751704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.751722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.751728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.760683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.760705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.760711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.771340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.771359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.771365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.778893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.778910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.778917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.787391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.787408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.787414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.792016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.792034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.792040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.802044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.802062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.802069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.811819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.811837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.811843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.821711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.821729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.821736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.831410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.831428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.831434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.841738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.841755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.841762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.851972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.851991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.851997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.154 [2024-07-15 13:14:25.864615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.154 [2024-07-15 13:14:25.864634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.154 [2024-07-15 13:14:25.864640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.155 [2024-07-15 13:14:25.875717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.155 [2024-07-15 13:14:25.875735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.155 [2024-07-15 13:14:25.875741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.155 [2024-07-15 13:14:25.886869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.155 [2024-07-15 13:14:25.886886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.155 [2024-07-15 13:14:25.886893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.155 [2024-07-15 13:14:25.897466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.155 [2024-07-15 13:14:25.897485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.155 [2024-07-15 13:14:25.897491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.155 [2024-07-15 13:14:25.906511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.155 [2024-07-15 13:14:25.906528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.155 [2024-07-15 13:14:25.906535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.155 [2024-07-15 13:14:25.918346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.155 [2024-07-15 13:14:25.918363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.155 [2024-07-15 13:14:25.918369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.155 [2024-07-15 13:14:25.927262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.155 [2024-07-15 13:14:25.927280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.155 [2024-07-15 13:14:25.927290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.155 [2024-07-15 13:14:25.936582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.155 [2024-07-15 13:14:25.936601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.155 [2024-07-15 13:14:25.936607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.155 [2024-07-15 13:14:25.948083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.155 [2024-07-15 13:14:25.948100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.155 [2024-07-15 13:14:25.948106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.155 [2024-07-15 13:14:25.958401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.155 [2024-07-15 13:14:25.958418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.155 [2024-07-15 13:14:25.958424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.155 [2024-07-15 13:14:25.969189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.155 [2024-07-15 13:14:25.969207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.155 [2024-07-15 13:14:25.969213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.417 [2024-07-15 13:14:25.979821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.417 
[2024-07-15 13:14:25.979839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.417 [2024-07-15 13:14:25.979845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.417 [2024-07-15 13:14:25.990086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.417 [2024-07-15 13:14:25.990104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.417 [2024-07-15 13:14:25.990111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.417 [2024-07-15 13:14:26.000636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.417 [2024-07-15 13:14:26.000655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.417 [2024-07-15 13:14:26.000661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.417 [2024-07-15 13:14:26.010918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.417 [2024-07-15 13:14:26.010936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.417 [2024-07-15 13:14:26.010942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.417 [2024-07-15 13:14:26.022062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.417 [2024-07-15 13:14:26.022083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.417 [2024-07-15 13:14:26.022089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.417 [2024-07-15 13:14:26.031381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.417 [2024-07-15 13:14:26.031399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.417 [2024-07-15 13:14:26.031406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.417 [2024-07-15 13:14:26.040493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11330f0) 00:29:04.417 [2024-07-15 13:14:26.040511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.417 [2024-07-15 13:14:26.040517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:04.417 00:29:04.417 Latency(us) 00:29:04.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:04.417 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:04.417 nvme0n1 : 2.00 2995.74 374.47 0.00 0.00 5338.19 1058.13 14090.24 00:29:04.417 =================================================================================================================== 00:29:04.417 Total : 2995.74 374.47 0.00 0.00 5338.19 1058.13 14090.24 00:29:04.417 0 00:29:04.417 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:04.417 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:04.417 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:04.417 | .driver_specific 00:29:04.417 | .nvme_error 00:29:04.417 | .status_code 00:29:04.417 | .command_transient_transport_error' 00:29:04.417 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:04.417 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 )) 00:29:04.417 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 872510 00:29:04.417 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 872510 ']' 00:29:04.417 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 872510 00:29:04.417 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:04.417 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:04.417 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 872510 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 872510' 00:29:04.678 killing process with pid 872510 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 872510 00:29:04.678 Received shutdown signal, test time was about 2.000000 seconds 00:29:04.678 00:29:04.678 Latency(us) 00:29:04.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.678 =================================================================================================================== 00:29:04.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 872510 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=873190 00:29:04.678 13:14:26 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 873190 /var/tmp/bperf.sock 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 873190 ']' 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:04.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:04.678 13:14:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:04.678 [2024-07-15 13:14:26.447142] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:29:04.678 [2024-07-15 13:14:26.447203] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873190 ] 00:29:04.678 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.938 [2024-07-15 13:14:26.530948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.938 [2024-07-15 13:14:26.583483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.509 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:05.509 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:05.509 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:05.509 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:05.768 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:05.768 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.768 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:05.768 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.768 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.768 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.027 nvme0n1 00:29:06.027 13:14:27 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:06.027 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.027 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:06.027 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.027 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:06.027 13:14:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:06.027 Running I/O for 2 seconds... 00:29:06.287 [2024-07-15 13:14:27.862990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:27.863206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:27.863237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:27.875224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:27.875564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:27.875583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:27.887559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:27.887872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:27.887889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:27.899770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:27.900100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:27.900115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:27.911948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:27.912291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:27.912307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:27.924095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:27.924401] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:27.924417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:27.936273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:27.936601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:27.936616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:27.948447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:27.948797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:27.948812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:27.960604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:27.960923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:27.960939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:27.972790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:27.973092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:27.973108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:27.984901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:27.985227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:27.985246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:27.997080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:27.997432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:27.997448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:28.009218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 
13:14:28.009549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:28.009564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:28.021385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:28.021692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:28.021707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:28.033507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:28.033811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:28.033827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:28.045674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:28.046004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:28.046019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:28.057830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:28.058196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:28.058211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.287 [2024-07-15 13:14:28.069980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.287 [2024-07-15 13:14:28.070293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-07-15 13:14:28.070308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.288 [2024-07-15 13:14:28.082105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.288 [2024-07-15 13:14:28.082464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.288 [2024-07-15 13:14:28.082480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.288 [2024-07-15 13:14:28.094210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.288 
[2024-07-15 13:14:28.094559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.288 [2024-07-15 13:14:28.094574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.288 [2024-07-15 13:14:28.106361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.288 [2024-07-15 13:14:28.106702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.288 [2024-07-15 13:14:28.106716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.118556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.547 [2024-07-15 13:14:28.118892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.118908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.130691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.547 [2024-07-15 13:14:28.131031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.131047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.142807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.547 [2024-07-15 13:14:28.143009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.143024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.154968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.547 [2024-07-15 13:14:28.155311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.155329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.167080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.547 [2024-07-15 13:14:28.167419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.167435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.179221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 
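The repeated pairs of "data digest error on tqpair" followed by "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" above are the expected output of this test phase: the bperf controller was attached with --ddgst while accel_error_inject_error -o crc32c -t corrupt keeps corrupting the CRC32C calculation, so each data digest check on the PDU payload fails and the corresponding I/O completes with a transient transport error that the test later counts. Below is a minimal illustrative sketch (not SPDK code) of what a receiver-side data digest check amounts to, assuming a hand-rolled CRC-32C helper and a hypothetical ddgst_ok() wrapper:

  # Sketch only: NVMe/TCP data digests (DDGST) are CRC32C values over the PDU
  # payload; a mismatch between the recomputed digest and the one carried in
  # the PDU is reported as a data digest error, as in the log entries above.

  def crc32c(data: bytes) -> int:
      """Bitwise CRC-32C (Castagnoli), reflected, init and final XOR 0xFFFFFFFF."""
      crc = 0xFFFFFFFF
      for byte in data:
          crc ^= byte
          for _ in range(8):
              if crc & 1:
                  crc = (crc >> 1) ^ 0x82F63B78
              else:
                  crc >>= 1
      return crc ^ 0xFFFFFFFF

  def ddgst_ok(payload: bytes, ddgst: int) -> bool:
      """True when the recomputed digest matches the DDGST received with the PDU."""
      return crc32c(payload) == ddgst

  if __name__ == "__main__":
      payload = b"123456789"
      good = crc32c(payload)      # 0xE3069283, the standard CRC-32C check value
      corrupted = good ^ 0x1      # stands in for the injected CRC32C corruption
      print(hex(good), ddgst_ok(payload, good), ddgst_ok(payload, corrupted))

With the corrupt injection active, the comparison above would fail on every transfer, which is why each WRITE in this run is completed back to bdevperf with a transient transport error rather than success.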
00:29:06.547 [2024-07-15 13:14:28.179560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.179575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.191348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.547 [2024-07-15 13:14:28.191640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.191656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.203675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.547 [2024-07-15 13:14:28.204017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.204032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.215812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.547 [2024-07-15 13:14:28.216154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.216169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.227946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.547 [2024-07-15 13:14:28.228262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.228278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.240092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.547 [2024-07-15 13:14:28.240412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.240427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.252262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.547 [2024-07-15 13:14:28.252605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.252620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.264418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with 
pdu=0x2000190f96f8 00:29:06.547 [2024-07-15 13:14:28.264764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.264780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.547 [2024-07-15 13:14:28.276570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.547 [2024-07-15 13:14:28.276881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.547 [2024-07-15 13:14:28.276896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.548 [2024-07-15 13:14:28.288710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.548 [2024-07-15 13:14:28.289058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.548 [2024-07-15 13:14:28.289073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.548 [2024-07-15 13:14:28.300833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.548 [2024-07-15 13:14:28.301174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.548 [2024-07-15 13:14:28.301189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.548 [2024-07-15 13:14:28.312951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.548 [2024-07-15 13:14:28.313255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.548 [2024-07-15 13:14:28.313271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.548 [2024-07-15 13:14:28.325049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.548 [2024-07-15 13:14:28.325381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.548 [2024-07-15 13:14:28.325396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.548 [2024-07-15 13:14:28.337187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.548 [2024-07-15 13:14:28.337530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.548 [2024-07-15 13:14:28.337545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.548 [2024-07-15 13:14:28.349302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.548 [2024-07-15 13:14:28.349650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.548 [2024-07-15 13:14:28.349665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.548 [2024-07-15 13:14:28.361437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.548 [2024-07-15 13:14:28.361776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.548 [2024-07-15 13:14:28.361791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.373530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.373883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.373899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.385718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.386057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.386072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.397822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.398166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.398182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.409987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.410290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.410306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.422096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.422440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.422455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.434244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.434578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.434593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.446362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.446559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.446574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.458526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.458739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.458754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.470658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.470955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.470969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.482867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.483189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.483203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.495002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.495294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.495310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.507132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.507472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.507487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.519257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.519608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.519623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.531434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.531724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.531739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.543605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.543935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.543950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.555772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.556097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.556112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.567890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.568203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.568218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.580035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.580359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.580377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.592176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.592514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.592529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.604317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.604617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.604632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.616459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.616768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.616783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.808 [2024-07-15 13:14:28.628573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:06.808 [2024-07-15 13:14:28.628897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.808 [2024-07-15 13:14:28.628912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.069 [2024-07-15 13:14:28.640702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.069 [2024-07-15 13:14:28.641052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.069 [2024-07-15 13:14:28.641067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.069 [2024-07-15 13:14:28.652842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.069 [2024-07-15 13:14:28.653045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.069 [2024-07-15 13:14:28.653059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.069 [2024-07-15 13:14:28.665017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.069 [2024-07-15 13:14:28.665348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.069 [2024-07-15 13:14:28.665364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.069 [2024-07-15 13:14:28.677148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.069 [2024-07-15 13:14:28.677472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.069 [2024-07-15 13:14:28.677487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.069 [2024-07-15 13:14:28.689274] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.069 [2024-07-15 13:14:28.689586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.069 [2024-07-15 13:14:28.689601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.069 [2024-07-15 13:14:28.701383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.069 [2024-07-15 13:14:28.701725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.069 [2024-07-15 13:14:28.701740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.069 [2024-07-15 13:14:28.713497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.069 [2024-07-15 13:14:28.713806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.069 [2024-07-15 13:14:28.713821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.069 [2024-07-15 13:14:28.725624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.069 [2024-07-15 13:14:28.725953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.069 [2024-07-15 13:14:28.725968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.069 [2024-07-15 13:14:28.737722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.069 [2024-07-15 13:14:28.738037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.069 [2024-07-15 13:14:28.738052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.069 [2024-07-15 13:14:28.749845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.069 [2024-07-15 13:14:28.750178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.069 [2024-07-15 13:14:28.750194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.069 [2024-07-15 13:14:28.761961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.069 [2024-07-15 13:14:28.762161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.069 [2024-07-15 13:14:28.762176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.069 [2024-07-15 13:14:28.774091] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.069 [2024-07-15 13:14:28.774415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.069 [2024-07-15 13:14:28.774430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.070 [2024-07-15 13:14:28.786219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.070 [2024-07-15 13:14:28.786562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.070 [2024-07-15 13:14:28.786576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.070 [2024-07-15 13:14:28.798357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.070 [2024-07-15 13:14:28.798701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.070 [2024-07-15 13:14:28.798716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.070 [2024-07-15 13:14:28.810478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.070 [2024-07-15 13:14:28.810787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.070 [2024-07-15 13:14:28.810802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.070 [2024-07-15 13:14:28.822650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.070 [2024-07-15 13:14:28.822987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.070 [2024-07-15 13:14:28.823002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.070 [2024-07-15 13:14:28.834790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.070 [2024-07-15 13:14:28.835106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.070 [2024-07-15 13:14:28.835121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.070 [2024-07-15 13:14:28.846926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.070 [2024-07-15 13:14:28.847226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.070 [2024-07-15 13:14:28.847247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.070 [2024-07-15 
13:14:28.859070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.070 [2024-07-15 13:14:28.859387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.070 [2024-07-15 13:14:28.859403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.070 [2024-07-15 13:14:28.871174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.070 [2024-07-15 13:14:28.871502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.070 [2024-07-15 13:14:28.871516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.070 [2024-07-15 13:14:28.883283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.070 [2024-07-15 13:14:28.883645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.070 [2024-07-15 13:14:28.883660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:28.895544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:28.895878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:28.895899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:28.907628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:28.907950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:28.907965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:28.919774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:28.920115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:28.920131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:28.931942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:28.932285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:28.932300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 
[2024-07-15 13:14:28.944132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:28.944469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:28.944485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:28.956276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:28.956613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:28.956628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:28.968385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:28.968690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:28.968705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:28.980561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:28.980870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:28.980884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:28.992703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:28.993039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:28.993055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:29.004800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:29.005168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:29.005185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:29.016978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:29.017306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:29.017321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
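
The three-line pattern that repeats through this stretch is the digest-error path under test: tcp.c:data_crc32_calc_done() flags a CRC32C data-digest mismatch on the qpair, nvme_qpair.c prints the affected WRITE, and its completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the bperf instance is configured to retry and count (--bdev-retry-count -1, --nvme-error-stat). Condensed from the host/digest.sh xtrace that appears further down in this log (sockets, target address and the jq filter are as captured there; paths are shortened, and this is only a rough sketch of the flow, not an extra command sequence from the run):

  # bperf side: keep per-NVMe error stats, retry indefinitely, attach the target with data digest (--ddgst) enabled
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # error injection (issued via rpc_cmd in the script): corrupt 32 CRC32C operations so data digests mismatch
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # run the workload, then read back the transient transport error counter that the script checks for being > 0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
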
00:29:07.331 [2024-07-15 13:14:29.029113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:29.029441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:29.029456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:29.041275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:29.041579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:29.041594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:29.053395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:29.053721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:29.053737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:29.065532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:29.065879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:29.065894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:29.077766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:29.078080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:29.078095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:29.089935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:29.090262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:29.090277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:29.102086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:29.102459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:29.102475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:29:07.331 [2024-07-15 13:14:29.114265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:29.114464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:29.114479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:29.126419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:29.126710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:29.126725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:29.138527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:29.138835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:29.138849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.331 [2024-07-15 13:14:29.150692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.331 [2024-07-15 13:14:29.151026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.331 [2024-07-15 13:14:29.151041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.162863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.163206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.163221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.174995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.175384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.175400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.187114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.187464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.187479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e 
p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.199439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.199778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.199793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.211583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.211926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.211940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.223712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.224040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.224055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.235837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.236179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.236194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.247952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.248297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.248312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.260095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.260428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.260443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.272238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.272579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.272595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.284378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.284677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.284692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.296498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.296814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.296830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.308637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.308966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.308981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.320761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.321087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.321105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.332918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.333254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.333270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.345048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.345351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.345366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.357198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.357511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.357526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.369350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.369686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.369702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.381490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.381811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.381826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.393604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.393910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.592 [2024-07-15 13:14:29.393925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.592 [2024-07-15 13:14:29.405766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.592 [2024-07-15 13:14:29.406069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.593 [2024-07-15 13:14:29.406084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.417905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.418247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.418263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.430059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.430416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.430431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.442242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.442593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.442607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.454330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.454682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.454697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.466496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.466846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.466861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.478684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.478992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.479007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.490761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.491052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.491068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.502953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.503252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.503267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.515057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.515401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.515416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.527262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.527595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.527610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.539417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.539763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.539778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.551582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.551920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.551935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.563683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.563986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.564001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.575839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.576179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.576193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.587976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.588314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.588329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.600099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.600442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.600457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.612256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.612587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.612602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.624350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.624689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.624704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.636498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.636834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.636848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.854 [2024-07-15 13:14:29.648626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.854 [2024-07-15 13:14:29.648953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.854 [2024-07-15 13:14:29.648968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.855 [2024-07-15 13:14:29.660774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.855 [2024-07-15 13:14:29.661069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.855 [2024-07-15 13:14:29.661084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.855 [2024-07-15 13:14:29.672920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:07.855 [2024-07-15 13:14:29.673262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.855 [2024-07-15 13:14:29.673277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:08.116 [2024-07-15 13:14:29.685059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:08.116 [2024-07-15 13:14:29.685395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.116 [2024-07-15 13:14:29.685410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:08.116 [2024-07-15 13:14:29.697204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:08.116 [2024-07-15 13:14:29.697494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.116 [2024-07-15 13:14:29.697509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:08.116 [2024-07-15 13:14:29.709307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:08.116 [2024-07-15 13:14:29.709612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.116 [2024-07-15 13:14:29.709627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:08.116 [2024-07-15 13:14:29.721512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:08.116 [2024-07-15 13:14:29.721859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.116 [2024-07-15 13:14:29.721874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:08.116 [2024-07-15 13:14:29.733693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:08.116 [2024-07-15 13:14:29.733991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.116 [2024-07-15 13:14:29.734006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:08.116 [2024-07-15 13:14:29.745784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:08.116 [2024-07-15 13:14:29.746122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.116 [2024-07-15 13:14:29.746140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:08.116 [2024-07-15 13:14:29.757909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:08.116 [2024-07-15 13:14:29.758244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.116 [2024-07-15 13:14:29.758259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:08.116 [2024-07-15 13:14:29.770043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:08.117 [2024-07-15 13:14:29.770247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.117 [2024-07-15 13:14:29.770261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:08.117 [2024-07-15 13:14:29.782150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8 00:29:08.117 [2024-07-15 13:14:29.782485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.117 [2024-07-15 13:14:29.782500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:08.117 [2024-07-15 13:14:29.794290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8
00:29:08.117 [2024-07-15 13:14:29.794616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.117 [2024-07-15 13:14:29.794630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:08.117 [2024-07-15 13:14:29.806427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8
00:29:08.117 [2024-07-15 13:14:29.806727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.117 [2024-07-15 13:14:29.806742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:08.117 [2024-07-15 13:14:29.818576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8
00:29:08.117 [2024-07-15 13:14:29.818773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.117 [2024-07-15 13:14:29.818787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:08.117 [2024-07-15 13:14:29.830679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8
00:29:08.117 [2024-07-15 13:14:29.830877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.117 [2024-07-15 13:14:29.830891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:08.117 [2024-07-15 13:14:29.842833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333ac0) with pdu=0x2000190f96f8
00:29:08.117 [2024-07-15 13:14:29.843173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:08.117 [2024-07-15 13:14:29.843188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:08.117
00:29:08.117 Latency(us)
00:29:08.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:08.117 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:08.117 nvme0n1 : 2.01 20936.21 81.78 0.00 0.00 6102.10 4450.99 16602.45
00:29:08.117 ===================================================================================================================
00:29:08.117 Total : 20936.21 81.78 0.00 0.00 6102.10 4450.99 16602.45
00:29:08.117 0
00:29:08.117 13:14:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:08.117 13:14:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:08.117 13:14:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:08.117 | .driver_specific
00:29:08.117 | .nvme_error
00:29:08.117 | .status_code
00:29:08.117 | .command_transient_transport_error' 00:29:08.117 13:14:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:08.378 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 )) 00:29:08.378 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 873190 00:29:08.378 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 873190 ']' 00:29:08.378 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 873190 00:29:08.378 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:08.378 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:08.378 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 873190 00:29:08.378 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:08.378 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:08.378 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 873190' 00:29:08.378 killing process with pid 873190 00:29:08.378 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 873190 00:29:08.378 Received shutdown signal, test time was about 2.000000 seconds 00:29:08.378 00:29:08.378 Latency(us) 00:29:08.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.378 =================================================================================================================== 00:29:08.378 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:08.378 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 873190 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=873876 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 873876 /var/tmp/bperf.sock 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 873876 ']' 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:08.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:08.639 13:14:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:08.639 [2024-07-15 13:14:30.263134] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:29:08.639 [2024-07-15 13:14:30.263191] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873876 ] 00:29:08.639 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:08.639 Zero copy mechanism will not be used. 00:29:08.639 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.639 [2024-07-15 13:14:30.341682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.639 [2024-07-15 13:14:30.395031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.210 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:09.210 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:09.210 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:09.210 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:09.470 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:09.470 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.470 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.470 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.470 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.470 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.041 nvme0n1 00:29:10.041 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:10.041 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.041 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:10.041 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.041 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:10.041 13:14:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:10.042 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:10.042 Zero copy mechanism will not be used. 00:29:10.042 Running I/O for 2 seconds... 00:29:10.042 [2024-07-15 13:14:31.703174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.703464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.703491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.714562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.714912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.714933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.724498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.724841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.724860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.735740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.736074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.736092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.747255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.747590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.747607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.757332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.757578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.757595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.768106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 
13:14:31.768446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.768464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.779889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.780219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.780241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.788500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.788836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.788853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.798228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.798575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.798594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.808437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.808765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.808782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.818814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.818926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.818942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.829800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.829901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.829916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.841470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with 
pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.841793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.841810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.042 [2024-07-15 13:14:31.854961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.042 [2024-07-15 13:14:31.855312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.042 [2024-07-15 13:14:31.855328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.867388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.867736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.867752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.878117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.878351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.878367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.889140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.889520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.889537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.897903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.898236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.898252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.904596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.904938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.904955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.911760] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.912104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.912121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.917815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.918138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.918154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.923105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.923318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.923334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.930532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.930741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.930757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.934971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.935189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.935205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.940792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.941001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.941016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.947876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.947937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.947952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.957680] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.957893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.957908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.304 [2024-07-15 13:14:31.965709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.304 [2024-07-15 13:14:31.966039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.304 [2024-07-15 13:14:31.966056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:31.973076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:31.973411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:31.973428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:31.979606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:31.979929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:31.979945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:31.987051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:31.987297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:31.987313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:31.997562] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:31.997896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:31.997912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.005336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.005697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.005713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
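The triplets repeating through this run (a data_crc32_calc_done digest error, the WRITE that carried it, and a COMMAND TRANSIENT TRANSPORT ERROR completion) are the expected product of the setup traced above. A condensed sketch of that sequence, with paths shortened relative to the SPDK checkout used in this job (the full paths appear in the trace) and the waitforlisten handshake omitted:

  # Launch bdevperf on its own RPC socket: 2 seconds of 128 KiB randwrite at queue depth 16.
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  # Keep per-status-code NVMe error counters and pass --bdev-retry-count -1, as in the trace.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the TCP controller with data digest enabled (--ddgst).
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Inject crc32c corruption (rpc_cmd is traced without -s, so it goes to the default RPC socket rather than bperf.sock).
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # Start the timed run.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each injected digest error then shows up at the initiator as one of the transient transport error completions logged here.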
00:29:10.305 [2024-07-15 13:14:32.014498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.014826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.014843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.020605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.020690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.020707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.027090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.027437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.027454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.035106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.035435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.035451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.040927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.041263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.041279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.046541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.046860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.046876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.051293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.051503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.051518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.055722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.055932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.055948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.062327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.062537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.062552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.066797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.067005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.067020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.071694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.071908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.071925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.076552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.076758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.076774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.081076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.081286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.081303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.085384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.085591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.085607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.089397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.089603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.089619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.093643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.093848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.093864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.098420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.098625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.098641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.102666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.102871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.102887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.107469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.107676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.107691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.111432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.111636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.111652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.115664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.115870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.115885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.119644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.305 [2024-07-15 13:14:32.119849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.305 [2024-07-15 13:14:32.119864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.305 [2024-07-15 13:14:32.123620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.306 [2024-07-15 13:14:32.123826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.306 [2024-07-15 13:14:32.123842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.306 [2024-07-15 13:14:32.127654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.306 [2024-07-15 13:14:32.127858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.306 [2024-07-15 13:14:32.127873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.131548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.131751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.131766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.135448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.135654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.135669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.139432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.139637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.139653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.143374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.143580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 
[2024-07-15 13:14:32.143600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.147280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.147485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.147500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.151323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.151528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.151544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.155256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.155458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.155473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.159074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.159282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.159298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.162893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.163095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.163111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.169014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.169310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.169327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.176478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.176788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.176805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.182716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.183005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.183021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.192169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.192532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.192549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.201118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.201458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.201475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.206970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.207286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.207302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.212567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.212904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.212921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.218899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.219118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.219134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.226636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.226973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.226989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.235168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.235522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.235538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.245392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.245719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.568 [2024-07-15 13:14:32.245735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.568 [2024-07-15 13:14:32.254528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.568 [2024-07-15 13:14:32.254744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.254763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.265693] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.265905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.265921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.277274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.277593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.277610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.285817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.286148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.286164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.295128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.295360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.295376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.305908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.306133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.306149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.314532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.314615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.314629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.324619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.324946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.324963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.334039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.334374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.334390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.343172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.343512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.343528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.351130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.351474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.351491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.359632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 
[2024-07-15 13:14:32.359940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.359957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.367879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.368202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.368219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.374802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.375105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.375122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.380915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.381223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.381244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.569 [2024-07-15 13:14:32.389081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.569 [2024-07-15 13:14:32.389149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.569 [2024-07-15 13:14:32.389164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.396585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.396931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.396947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.402584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.402667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.402682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.412337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.412696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.412712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.418976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.419175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.419191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.423527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.423728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.423743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.430300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.430498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.430514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.436835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.437040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.437056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.446567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.446766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.446781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.453535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.453885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.453901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.460195] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.460422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.460438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.469101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.469458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.469477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.476586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.476893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.476910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.485608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.485806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.485822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.490533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.490756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.490772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.498421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.498743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.498759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.505357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.505558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.505574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
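When the 2-second run above finishes, the harness turns this stream into a pass/fail decision the same way the get_transient_errcount trace earlier in this log does: it queries bdev_get_iostat over bperf.sock and pulls the transient transport error count out with jq. A minimal sketch, assuming the same socket and bdev name, with the rpc.py path shortened relative to the SPDK checkout and an illustrative variable name:

  # Read nvme0n1 statistics from bdevperf and extract the transient transport error counter.
  errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest-error case passes only if at least one such completion was counted,
  # mirroring the (( 164 > 0 )) check earlier in this log.
  (( errcount > 0 ))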
00:29:10.831 [2024-07-15 13:14:32.510660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.510986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.511002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.516084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.516288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.516303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.520238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.520433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.520448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.525109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.525364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.525379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.530791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.530986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.531002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.537013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.537347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.537363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.546609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.831 [2024-07-15 13:14:32.547066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.831 [2024-07-15 13:14:32.547083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.831 [2024-07-15 13:14:32.555254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.832 [2024-07-15 13:14:32.555536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.832 [2024-07-15 13:14:32.555552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.832 [2024-07-15 13:14:32.562162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.832 [2024-07-15 13:14:32.562469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.832 [2024-07-15 13:14:32.562485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.832 [2024-07-15 13:14:32.569841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.832 [2024-07-15 13:14:32.570042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.832 [2024-07-15 13:14:32.570058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.832 [2024-07-15 13:14:32.577363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.832 [2024-07-15 13:14:32.577683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.832 [2024-07-15 13:14:32.577700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.832 [2024-07-15 13:14:32.585932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.832 [2024-07-15 13:14:32.586153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.832 [2024-07-15 13:14:32.586169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.832 [2024-07-15 13:14:32.596005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.832 [2024-07-15 13:14:32.596324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.832 [2024-07-15 13:14:32.596340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.832 [2024-07-15 13:14:32.606999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.832 [2024-07-15 13:14:32.607340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.832 [2024-07-15 13:14:32.607356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.832 [2024-07-15 13:14:32.618291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.832 [2024-07-15 13:14:32.618727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.832 [2024-07-15 13:14:32.618744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.832 [2024-07-15 13:14:32.630210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.832 [2024-07-15 13:14:32.630733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.832 [2024-07-15 13:14:32.630750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.832 [2024-07-15 13:14:32.642254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.832 [2024-07-15 13:14:32.642658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.832 [2024-07-15 13:14:32.642675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.832 [2024-07-15 13:14:32.653288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:10.832 [2024-07-15 13:14:32.653696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.832 [2024-07-15 13:14:32.653712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.663625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.663953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.663970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.673906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.674273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.674290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.684217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.684565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.684585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.695586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.695983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.695999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.705424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.705790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.705806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.714359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.714679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.714695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.722420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.722747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.722764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.730204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.730559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.730576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.738055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.738383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.738399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.746096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.746478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 
[2024-07-15 13:14:32.746494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.752199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.752619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.752636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.759607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.759824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.759840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.768206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.768590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.768607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.775340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.775537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.775552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.780783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.781160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.093 [2024-07-15 13:14:32.781176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.093 [2024-07-15 13:14:32.788990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.093 [2024-07-15 13:14:32.789189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.789204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.797568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.797914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.797930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.805493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.805779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.805795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.813709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.814061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.814077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.821109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.821508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.821529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.829313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.829674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.829690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.837070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.837413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.837429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.846128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.846440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.846457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.854321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.854639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.854656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.862834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.863037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.863053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.871093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.871438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.871454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.878632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.879013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.879029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.886996] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.887352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.887368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.895214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.895617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.895633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.902278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.902623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.902640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.909457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.909812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.909828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.094 [2024-07-15 13:14:32.916692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.094 [2024-07-15 13:14:32.916906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.094 [2024-07-15 13:14:32.916922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:32.925498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:32.925837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:32.925853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:32.933318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:32.933714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:32.933731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:32.940592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:32.940866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:32.940883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:32.950904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:32.951213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:32.951235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:32.960503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:32.960821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:32.960838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:32.969502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 
[2024-07-15 13:14:32.969811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:32.969827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:32.979102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:32.979323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:32.979339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:32.988776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:32.989160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:32.989176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:32.998288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:32.998658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:32.998674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:33.008828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:33.009127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:33.009143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:33.018263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:33.018648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:33.018664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:33.028095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:33.028294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:33.028309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:33.039000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:33.039257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:33.039273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:33.048631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:33.048861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:33.048879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:33.058568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:33.058885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:33.058902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:33.067820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:33.068187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:33.068203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:33.077035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:33.077454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:33.077471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:33.086831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:33.087125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:33.087142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:33.095783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:33.096167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:33.096183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:33.105641] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:33.105992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:33.106008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:33.115491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.355 [2024-07-15 13:14:33.115845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.355 [2024-07-15 13:14:33.115862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.355 [2024-07-15 13:14:33.124795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.356 [2024-07-15 13:14:33.125170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.356 [2024-07-15 13:14:33.125186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.356 [2024-07-15 13:14:33.135255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.356 [2024-07-15 13:14:33.135608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.356 [2024-07-15 13:14:33.135625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.356 [2024-07-15 13:14:33.145047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.356 [2024-07-15 13:14:33.145394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.356 [2024-07-15 13:14:33.145411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.356 [2024-07-15 13:14:33.155132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.356 [2024-07-15 13:14:33.155514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.356 [2024-07-15 13:14:33.155530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.356 [2024-07-15 13:14:33.165512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.356 [2024-07-15 13:14:33.165737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.356 [2024-07-15 13:14:33.165753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
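Note (editor's illustration, not part of the console log): every repeated pair above records the same event — in tcp.c:data_crc32_calc_done the CRC32C data digest (DDGST) of a received NVMe/TCP PDU payload is recomputed, it does not match the digest carried in the PDU, and the pending WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal, self-contained sketch of that digest comparison follows; it assumes a generic bitwise CRC32C (Castagnoli polynomial, the digest NVMe/TCP specifies) and uses hypothetical names (crc32c, expected_ddgst, received_ddgst) — it is not SPDK's actual implementation.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise CRC32C (reflected Castagnoli polynomial 0x82F63B88). */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B88u : (crc >> 1);
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        uint8_t payload[512] = {0};                       /* dummy PDU payload */
        uint32_t expected_ddgst = crc32c(payload, sizeof(payload));
        uint32_t received_ddgst = expected_ddgst ^ 0x1u;  /* simulate corruption in transit */

        if (received_ddgst != expected_ddgst) {
            /* This mismatch is the condition the log reports as "Data digest error". */
            printf("Data digest error (expected 0x%08x, got 0x%08x)\n",
                   expected_ddgst, received_ddgst);
        }
        return 0;
    }

Digest-error injection like this is what the surrounding test exercises: because only the data was damaged in transit, the status is a transient transport error (00/22) rather than a command failure, so the initiator may simply retry the I/O.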
00:29:11.356 [2024-07-15 13:14:33.175098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.356 [2024-07-15 13:14:33.175309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.356 [2024-07-15 13:14:33.175324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.615 [2024-07-15 13:14:33.186093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.615 [2024-07-15 13:14:33.186462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.615 [2024-07-15 13:14:33.186478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.615 [2024-07-15 13:14:33.197165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.615 [2024-07-15 13:14:33.197591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.615 [2024-07-15 13:14:33.197608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.615 [2024-07-15 13:14:33.205878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.615 [2024-07-15 13:14:33.206130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.615 [2024-07-15 13:14:33.206155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.615 [2024-07-15 13:14:33.213923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.615 [2024-07-15 13:14:33.214275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.214291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.223267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.223650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.223667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.233063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.233223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.233244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.242116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.242371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.242387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.251796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.252157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.252173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.260847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.261194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.261211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.269008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.269211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.269226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.277862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.278182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.278198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.286205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.286549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.286566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.294923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.295236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.295255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.300906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.301074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.301089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.309084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.309256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.309271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.317816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.318092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.318108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.325862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.326205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.326221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.336620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.336807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.336823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.346012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.346409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.346426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.355636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.355963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.355979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.365334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.365576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.365591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.374127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.374321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.374337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.382846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.383144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.383160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.392056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.392370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.392386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.401767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.402095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.402112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.411926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.412217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.412238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.420776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.421192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 
[2024-07-15 13:14:33.421208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.616 [2024-07-15 13:14:33.431309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.616 [2024-07-15 13:14:33.431660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.616 [2024-07-15 13:14:33.431676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.441829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.442175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.442193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.452340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.452680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.452696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.462495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.462828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.462844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.472926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.473198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.473214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.483265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.483440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.483456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.493566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.493856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.493873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.504204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.504658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.504675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.512923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.513216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.513236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.522248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.522635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.522651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.531640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.532003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.532019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.539838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.540243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.540262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.549240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.549591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.549607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.557377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.557730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.557746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.566037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.566233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.566248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.573350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.573686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.573703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.581217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.581568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.581584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.590496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.590713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.590728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.595202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.595385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.595401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.600306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.600474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.600489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.605705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.606027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.606043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.610760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.610924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.610939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.614898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.615060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.615076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.623865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.624156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.624172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.630924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.631089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.631104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.640606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.640772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.640787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.649356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.649643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.649659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.657497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 
[2024-07-15 13:14:33.657809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.657825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.665547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.665854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.665873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.674528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.674815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.674831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.681400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.681508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.681523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.687338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.687635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.687651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.877 [2024-07-15 13:14:33.694031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2333bf0) with pdu=0x2000190fef90 00:29:11.877 [2024-07-15 13:14:33.694216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.877 [2024-07-15 13:14:33.694236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.877 00:29:11.877 Latency(us) 00:29:11.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.877 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:11.877 nvme0n1 : 2.00 3828.69 478.59 0.00 0.00 4172.27 1829.55 12834.13 00:29:11.877 =================================================================================================================== 00:29:11.877 Total : 3828.69 478.59 0.00 0.00 4172.27 1829.55 12834.13 00:29:12.137 0 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 
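The traced lines that follow expand get_transient_errcount: it asks the running bdevperf instance for per-bdev iostat over the bperf RPC socket and pulls the NVMe transient transport error counter out of the returned JSON. A minimal standalone sketch of the same query, reusing the rpc.py path and /var/tmp/bperf.sock socket from this run, would be:

    # Count COMMAND TRANSIENT TRANSPORT ERROR completions seen by nvme0n1,
    # as reported by the bdevperf RPC server (paths taken from the trace).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    echo "transient transport errors on nvme0n1: $errcount"

The test then only checks that this counter is non-zero ((( 247 > 0 )) below), i.e. that the injected data-digest failures really surfaced as transient transport errors.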
00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:12.137 | .driver_specific 00:29:12.137 | .nvme_error 00:29:12.137 | .status_code 00:29:12.137 | .command_transient_transport_error' 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 247 > 0 )) 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 873876 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 873876 ']' 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 873876 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 873876 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 873876' 00:29:12.137 killing process with pid 873876 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 873876 00:29:12.137 Received shutdown signal, test time was about 2.000000 seconds 00:29:12.137 00:29:12.137 Latency(us) 00:29:12.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.137 =================================================================================================================== 00:29:12.137 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.137 13:14:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 873876 00:29:12.397 13:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 871486 00:29:12.397 13:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 871486 ']' 00:29:12.397 13:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 871486 00:29:12.397 13:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:12.397 13:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:12.397 13:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 871486 00:29:12.397 13:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:12.397 13:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:12.397 13:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 871486' 00:29:12.397 killing process with pid 871486 00:29:12.397 13:14:34 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 871486 00:29:12.397 13:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 871486 00:29:12.658 00:29:12.658 real 0m16.417s 00:29:12.658 user 0m32.178s 00:29:12.658 sys 0m3.364s 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:12.658 ************************************ 00:29:12.658 END TEST nvmf_digest_error 00:29:12.658 ************************************ 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:12.658 rmmod nvme_tcp 00:29:12.658 rmmod nvme_fabrics 00:29:12.658 rmmod nvme_keyring 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 871486 ']' 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 871486 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 871486 ']' 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 871486 00:29:12.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (871486) - No such process 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 871486 is not found' 00:29:12.658 Process with pid 871486 is not found 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:12.658 13:14:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.201 13:14:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:15.201 00:29:15.201 real 0m43.300s 00:29:15.201 user 1m5.999s 00:29:15.201 sys 0m13.073s 00:29:15.201 13:14:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:15.201 
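The nvmftestfini teardown traced above unloads the kernel NVMe-over-TCP modules that nvmftestinit loaded and then removes the dedicated target namespace and its addresses. Condensed into a standalone sketch (interface and namespace names as used in this run; the netns removal command is an assumption about what _remove_spdk_ns does):

    # Teardown mirroring nvmftestfini for the TCP/phy setup in this run.
    modprobe -v -r nvme-tcp           # rmmod output above also shows nvme_fabrics
    modprobe -v -r nvme-fabrics       # and nvme_keyring being dropped
    ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1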
13:14:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:15.201 ************************************ 00:29:15.201 END TEST nvmf_digest 00:29:15.201 ************************************ 00:29:15.201 13:14:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:15.201 13:14:36 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:29:15.201 13:14:36 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:29:15.201 13:14:36 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:29:15.201 13:14:36 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:15.201 13:14:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:15.201 13:14:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:15.201 13:14:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:15.201 ************************************ 00:29:15.201 START TEST nvmf_bdevperf 00:29:15.201 ************************************ 00:29:15.201 13:14:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:15.201 * Looking for test storage... 00:29:15.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:15.201 13:14:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.201 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:15.201 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.201 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.201 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.201 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.201 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.201 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.201 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.201 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.201 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.201 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 
-- # MALLOC_BDEV_SIZE=64 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:15.202 13:14:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:23.372 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:23.372 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:29:23.372 Found net devices under 0000:31:00.0: cvl_0_0 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:23.372 Found net devices under 0000:31:00.1: cvl_0_1 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:23.372 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.373 13:14:44 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:23.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:29:23.373 00:29:23.373 --- 10.0.0.2 ping statistics --- 00:29:23.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.373 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:29:23.373 00:29:23.373 --- 10.0.0.1 ping statistics --- 00:29:23.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.373 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=879413 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 879413 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 879413 ']' 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
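nvmfappstart launches the target application inside the freshly created namespace and waitforlisten then blocks until its RPC socket answers. A rough standalone equivalent of that pattern, using the binary path and core mask from this run and assuming the default /var/tmp/spdk.sock RPC socket:

    # Start nvmf_tgt in the target namespace and wait until RPC is usable.
    NS=cvl_0_0_ns_spdk
    TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec "$NS" "$TGT" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the RPC socket; a crude stand-in for the waitforlisten helper.
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is up"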
00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:23.373 13:14:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:23.373 [2024-07-15 13:14:45.047228] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:29:23.373 [2024-07-15 13:14:45.047298] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.373 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.373 [2024-07-15 13:14:45.143491] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:23.633 [2024-07-15 13:14:45.238705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.633 [2024-07-15 13:14:45.238767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.633 [2024-07-15 13:14:45.238780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.633 [2024-07-15 13:14:45.238787] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.633 [2024-07-15 13:14:45.238793] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.633 [2024-07-15 13:14:45.238931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.633 [2024-07-15 13:14:45.239094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.633 [2024-07-15 13:14:45.239095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.203 [2024-07-15 13:14:45.876440] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.203 Malloc0 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.203 [2024-07-15 13:14:45.942673] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:24.203 { 00:29:24.203 "params": { 00:29:24.203 "name": "Nvme$subsystem", 00:29:24.203 "trtype": "$TEST_TRANSPORT", 00:29:24.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.203 "adrfam": "ipv4", 00:29:24.203 "trsvcid": "$NVMF_PORT", 00:29:24.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.203 "hdgst": ${hdgst:-false}, 00:29:24.203 "ddgst": ${ddgst:-false} 00:29:24.203 }, 00:29:24.203 "method": "bdev_nvme_attach_controller" 00:29:24.203 } 00:29:24.203 EOF 00:29:24.203 )") 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:24.203 13:14:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:24.203 "params": { 00:29:24.203 "name": "Nvme1", 00:29:24.203 "trtype": "tcp", 00:29:24.203 "traddr": "10.0.0.2", 00:29:24.203 "adrfam": "ipv4", 00:29:24.203 "trsvcid": "4420", 00:29:24.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:24.203 "hdgst": false, 00:29:24.203 "ddgst": false 00:29:24.203 }, 00:29:24.203 "method": "bdev_nvme_attach_controller" 00:29:24.203 }' 00:29:24.203 [2024-07-15 13:14:45.997280] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
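The rpc_cmd calls traced above configure the target end to end: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, the nqn.2016-06.io.spdk:cnode1 subsystem, its namespace, and a listener on 10.0.0.2:4420. Issued directly against rpc.py (a sketch; rpc_cmd is essentially a thin wrapper that forwards to the target's RPC socket), the same sequence is:

    # Target-side bring-up, mirroring host/bdevperf.sh@17..21 above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420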
00:29:24.203 [2024-07-15 13:14:45.997328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879596 ] 00:29:24.203 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.462 [2024-07-15 13:14:46.061193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.462 [2024-07-15 13:14:46.125664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.462 Running I/O for 1 seconds... 00:29:25.843 00:29:25.843 Latency(us) 00:29:25.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.843 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:25.843 Verification LBA range: start 0x0 length 0x4000 00:29:25.843 Nvme1n1 : 1.00 8559.47 33.44 0.00 0.00 14888.38 431.79 16165.55 00:29:25.843 =================================================================================================================== 00:29:25.843 Total : 8559.47 33.44 0.00 0.00 14888.38 431.79 16165.55 00:29:25.843 13:14:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=879927 00:29:25.843 13:14:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:25.843 13:14:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:25.843 13:14:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:25.843 13:14:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:25.843 13:14:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:25.843 13:14:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:25.843 13:14:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:25.843 { 00:29:25.843 "params": { 00:29:25.843 "name": "Nvme$subsystem", 00:29:25.843 "trtype": "$TEST_TRANSPORT", 00:29:25.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.843 "adrfam": "ipv4", 00:29:25.843 "trsvcid": "$NVMF_PORT", 00:29:25.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.843 "hdgst": ${hdgst:-false}, 00:29:25.843 "ddgst": ${ddgst:-false} 00:29:25.843 }, 00:29:25.843 "method": "bdev_nvme_attach_controller" 00:29:25.843 } 00:29:25.843 EOF 00:29:25.843 )") 00:29:25.843 13:14:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:25.843 13:14:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:25.843 13:14:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:25.843 13:14:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:25.843 "params": { 00:29:25.843 "name": "Nvme1", 00:29:25.843 "trtype": "tcp", 00:29:25.843 "traddr": "10.0.0.2", 00:29:25.843 "adrfam": "ipv4", 00:29:25.843 "trsvcid": "4420", 00:29:25.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:25.843 "hdgst": false, 00:29:25.843 "ddgst": false 00:29:25.843 }, 00:29:25.843 "method": "bdev_nvme_attach_controller" 00:29:25.843 }' 00:29:25.843 [2024-07-15 13:14:47.459087] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
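Both bdevperf runs above receive their initiator configuration as JSON on an inherited file descriptor (--json /dev/fd/62 and /dev/fd/63), generated by gen_nvmf_target_json around the bdev_nvme_attach_controller parameters printed in the trace. Written to an ordinary file, an equivalent standalone invocation of the 15-second run would be (the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON-config shape and is not shown verbatim in the trace):

    # /tmp/bperf.json : attach one NVMe-oF/TCP controller as bdev Nvme1n1.
    {
      "subsystems": [
        { "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                          "adrfam": "ipv4", "trsvcid": "4420",
                          "subnqn": "nqn.2016-06.io.spdk:cnode1",
                          "hostnqn": "nqn.2016-06.io.spdk:host1",
                          "hdgst": false, "ddgst": false } } ] } ]
    }

    # 128-deep, 4 KiB verify workload for 15 seconds against that bdev.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 15 -f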
00:29:25.844 [2024-07-15 13:14:47.459141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879927 ] 00:29:25.844 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.844 [2024-07-15 13:14:47.525218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.844 [2024-07-15 13:14:47.588751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.102 Running I/O for 15 seconds... 00:29:28.638 13:14:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 879413 00:29:28.638 13:14:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:28.638 [2024-07-15 13:14:50.426730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.426776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.426799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.426811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.426825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.426834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.426845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.426853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.426866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.426875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.426888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.426899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.426911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.426923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.426935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.426947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.426959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.426970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.426980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.426988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.426999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.427011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.427030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.427042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.427056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.427068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.427083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.427094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.427109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.638 [2024-07-15 13:14:50.427121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.427136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.638 [2024-07-15 13:14:50.427147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.427161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.638 [2024-07-15 13:14:50.427170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.427180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.638 [2024-07-15 13:14:50.427187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
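The wall of ABORTED - SQ DELETION completions around this point is the expected outcome of the failure injection traced at the start of this burst: host/bdevperf.sh@33 kills the nvmf_tgt process (pid 879413) with SIGKILL while the 15-second verify workload is still in flight, so every outstanding command on qid:1 completes with an abort status and the initiator is left to cope with the dead connection. Reduced to its essentials (pid from this run):

    # Failure injection step from the trace above (sketch).
    kill -9 879413   # SIGKILL the nvmf_tgt started earlier by nvmfappstart -m 0xE
    sleep 3          # host/bdevperf.sh@35 then waits before the next phase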
00:29:28.638 [2024-07-15 13:14:50.427197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.638 [2024-07-15 13:14:50.427205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.427214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.638 [2024-07-15 13:14:50.427221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.427236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.638 [2024-07-15 13:14:50.427244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.427253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.638 [2024-07-15 13:14:50.427261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.638 [2024-07-15 13:14:50.427270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.638 [2024-07-15 13:14:50.427278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.639 [2024-07-15 13:14:50.427296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.639 [2024-07-15 13:14:50.427314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.639 [2024-07-15 13:14:50.427331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427374] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:28.639 [2024-07-15 13:14:50.427485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427719] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 
lba:67896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.427986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.427996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:67968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67976 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:68000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:68008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:68016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:68056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 
13:14:50.428251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:68064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:68072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:68088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:68096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:68112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:68144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:68168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:68176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:68200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:68224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:68232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.639 [2024-07-15 13:14:50.428664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.639 [2024-07-15 13:14:50.428674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:68264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:68272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:68288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:68320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:68328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:68368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:28.640 [2024-07-15 13:14:50.428943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.428984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.428993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.429001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.429010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.429020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.429030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:68424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.429040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.429049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.429057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.429067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:68440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.640 [2024-07-15 13:14:50.429074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.429084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2359940 is same with the state(5) to be set 00:29:28.640 [2024-07-15 13:14:50.429093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:28.640 [2024-07-15 13:14:50.429099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:28.640 [2024-07-15 13:14:50.429106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68448 len:8 PRP1 0x0 PRP2 0x0 00:29:28.640 [2024-07-15 13:14:50.429114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:28.640 [2024-07-15 13:14:50.429154] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2359940 was disconnected and freed. reset controller. 00:29:28.640 [2024-07-15 13:14:50.432709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.640 [2024-07-15 13:14:50.432757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.640 [2024-07-15 13:14:50.433691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 13:14:50.433729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.640 [2024-07-15 13:14:50.433740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.640 [2024-07-15 13:14:50.433983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.640 [2024-07-15 13:14:50.434208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.640 [2024-07-15 13:14:50.434216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.640 [2024-07-15 13:14:50.434225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.640 [2024-07-15 13:14:50.437800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.640 [2024-07-15 13:14:50.446824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.640 [2024-07-15 13:14:50.447543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 13:14:50.447580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.640 [2024-07-15 13:14:50.447591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.640 [2024-07-15 13:14:50.447832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.640 [2024-07-15 13:14:50.448056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.640 [2024-07-15 13:14:50.448066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.640 [2024-07-15 13:14:50.448075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.640 [2024-07-15 13:14:50.451641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.640 [2024-07-15 13:14:50.460642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.640 [2024-07-15 13:14:50.461221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.640 [2024-07-15 13:14:50.461244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.640 [2024-07-15 13:14:50.461253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.640 [2024-07-15 13:14:50.461473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.902 [2024-07-15 13:14:50.461693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.902 [2024-07-15 13:14:50.461702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.902 [2024-07-15 13:14:50.461711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.902 [2024-07-15 13:14:50.465275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.902 [2024-07-15 13:14:50.474496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.902 [2024-07-15 13:14:50.475063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.902 [2024-07-15 13:14:50.475079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.902 [2024-07-15 13:14:50.475086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.902 [2024-07-15 13:14:50.475312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.902 [2024-07-15 13:14:50.475532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.902 [2024-07-15 13:14:50.475540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.902 [2024-07-15 13:14:50.475547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.902 [2024-07-15 13:14:50.479098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.902 [2024-07-15 13:14:50.488321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.902 [2024-07-15 13:14:50.488913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.902 [2024-07-15 13:14:50.488928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.902 [2024-07-15 13:14:50.488936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.902 [2024-07-15 13:14:50.489155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.902 [2024-07-15 13:14:50.489381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.902 [2024-07-15 13:14:50.489391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.902 [2024-07-15 13:14:50.489398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.902 [2024-07-15 13:14:50.492947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.902 [2024-07-15 13:14:50.502163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.902 [2024-07-15 13:14:50.502616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.902 [2024-07-15 13:14:50.502635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.902 [2024-07-15 13:14:50.502643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.902 [2024-07-15 13:14:50.502862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.902 [2024-07-15 13:14:50.503081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.902 [2024-07-15 13:14:50.503089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.902 [2024-07-15 13:14:50.503096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.902 [2024-07-15 13:14:50.506649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.902 [2024-07-15 13:14:50.516078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.902 [2024-07-15 13:14:50.516667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.902 [2024-07-15 13:14:50.516683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.902 [2024-07-15 13:14:50.516690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.902 [2024-07-15 13:14:50.516909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.902 [2024-07-15 13:14:50.517128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.902 [2024-07-15 13:14:50.517136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.902 [2024-07-15 13:14:50.517143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.902 [2024-07-15 13:14:50.520699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.902 [2024-07-15 13:14:50.529913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.902 [2024-07-15 13:14:50.530459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.902 [2024-07-15 13:14:50.530474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.902 [2024-07-15 13:14:50.530482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.902 [2024-07-15 13:14:50.530701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.902 [2024-07-15 13:14:50.530920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.902 [2024-07-15 13:14:50.530928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.902 [2024-07-15 13:14:50.530935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.902 [2024-07-15 13:14:50.534490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.902 [2024-07-15 13:14:50.543710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.902 [2024-07-15 13:14:50.544338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.902 [2024-07-15 13:14:50.544376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.902 [2024-07-15 13:14:50.544388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.902 [2024-07-15 13:14:50.544631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.902 [2024-07-15 13:14:50.544858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.903 [2024-07-15 13:14:50.544868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.903 [2024-07-15 13:14:50.544875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.903 [2024-07-15 13:14:50.548436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.903 [2024-07-15 13:14:50.557648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.903 [2024-07-15 13:14:50.558357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.903 [2024-07-15 13:14:50.558394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.903 [2024-07-15 13:14:50.558406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.903 [2024-07-15 13:14:50.558647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.903 [2024-07-15 13:14:50.558870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.903 [2024-07-15 13:14:50.558880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.903 [2024-07-15 13:14:50.558887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.903 [2024-07-15 13:14:50.562447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.903 [2024-07-15 13:14:50.571461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.903 [2024-07-15 13:14:50.572048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.903 [2024-07-15 13:14:50.572085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.903 [2024-07-15 13:14:50.572096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.903 [2024-07-15 13:14:50.572344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.903 [2024-07-15 13:14:50.572569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.903 [2024-07-15 13:14:50.572578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.903 [2024-07-15 13:14:50.572586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.903 [2024-07-15 13:14:50.576140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.903 [2024-07-15 13:14:50.585356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.903 [2024-07-15 13:14:50.586024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.903 [2024-07-15 13:14:50.586061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.903 [2024-07-15 13:14:50.586071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.903 [2024-07-15 13:14:50.586319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.903 [2024-07-15 13:14:50.586543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.903 [2024-07-15 13:14:50.586551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.903 [2024-07-15 13:14:50.586559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.903 [2024-07-15 13:14:50.590109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.903 [2024-07-15 13:14:50.599338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.903 [2024-07-15 13:14:50.599921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.903 [2024-07-15 13:14:50.599939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.903 [2024-07-15 13:14:50.599947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.903 [2024-07-15 13:14:50.600168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.903 [2024-07-15 13:14:50.600394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.903 [2024-07-15 13:14:50.600402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.903 [2024-07-15 13:14:50.600409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.903 [2024-07-15 13:14:50.603955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.903 [2024-07-15 13:14:50.613165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.903 [2024-07-15 13:14:50.613709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.903 [2024-07-15 13:14:50.613725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.903 [2024-07-15 13:14:50.613733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.903 [2024-07-15 13:14:50.613953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.903 [2024-07-15 13:14:50.614172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.903 [2024-07-15 13:14:50.614179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.903 [2024-07-15 13:14:50.614186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.903 [2024-07-15 13:14:50.617741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.903 [2024-07-15 13:14:50.627157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.903 [2024-07-15 13:14:50.627802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.903 [2024-07-15 13:14:50.627840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.903 [2024-07-15 13:14:50.627850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.903 [2024-07-15 13:14:50.628090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.903 [2024-07-15 13:14:50.628321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.903 [2024-07-15 13:14:50.628330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.903 [2024-07-15 13:14:50.628338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.903 [2024-07-15 13:14:50.631892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.903 [2024-07-15 13:14:50.641106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.903 [2024-07-15 13:14:50.641668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.903 [2024-07-15 13:14:50.641689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.903 [2024-07-15 13:14:50.641701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.903 [2024-07-15 13:14:50.641922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.903 [2024-07-15 13:14:50.642141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.903 [2024-07-15 13:14:50.642154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.903 [2024-07-15 13:14:50.642162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.903 [2024-07-15 13:14:50.645715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.903 [2024-07-15 13:14:50.654925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.903 [2024-07-15 13:14:50.655578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.903 [2024-07-15 13:14:50.655615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.903 [2024-07-15 13:14:50.655626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.903 [2024-07-15 13:14:50.655866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.903 [2024-07-15 13:14:50.656089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.903 [2024-07-15 13:14:50.656098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.903 [2024-07-15 13:14:50.656105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.903 [2024-07-15 13:14:50.659664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.903 [2024-07-15 13:14:50.668883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.903 [2024-07-15 13:14:50.669463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.903 [2024-07-15 13:14:50.669483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.903 [2024-07-15 13:14:50.669491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.903 [2024-07-15 13:14:50.669711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.903 [2024-07-15 13:14:50.669930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.903 [2024-07-15 13:14:50.669938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.903 [2024-07-15 13:14:50.669945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.903 [2024-07-15 13:14:50.673495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.903 [2024-07-15 13:14:50.682704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.903 [2024-07-15 13:14:50.683341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.903 [2024-07-15 13:14:50.683378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.903 [2024-07-15 13:14:50.683390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.903 [2024-07-15 13:14:50.683633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.903 [2024-07-15 13:14:50.683857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.903 [2024-07-15 13:14:50.683870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.903 [2024-07-15 13:14:50.683877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.903 [2024-07-15 13:14:50.687434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.903 [2024-07-15 13:14:50.696645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.903 [2024-07-15 13:14:50.697036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.903 [2024-07-15 13:14:50.697055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.903 [2024-07-15 13:14:50.697063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.903 [2024-07-15 13:14:50.697289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.903 [2024-07-15 13:14:50.697510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.904 [2024-07-15 13:14:50.697517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.904 [2024-07-15 13:14:50.697525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.904 [2024-07-15 13:14:50.701073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.904 [2024-07-15 13:14:50.710498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.904 [2024-07-15 13:14:50.711051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.904 [2024-07-15 13:14:50.711066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.904 [2024-07-15 13:14:50.711074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.904 [2024-07-15 13:14:50.711299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:28.904 [2024-07-15 13:14:50.711519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.904 [2024-07-15 13:14:50.711527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.904 [2024-07-15 13:14:50.711535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.904 [2024-07-15 13:14:50.715083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.904 [2024-07-15 13:14:50.724300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.904 [2024-07-15 13:14:50.724978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.904 [2024-07-15 13:14:50.725016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:28.904 [2024-07-15 13:14:50.725026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:28.904 [2024-07-15 13:14:50.725276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.165 [2024-07-15 13:14:50.725500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.165 [2024-07-15 13:14:50.725511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.165 [2024-07-15 13:14:50.725518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.165 [2024-07-15 13:14:50.729077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.165 [2024-07-15 13:14:50.738089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.165 [2024-07-15 13:14:50.738815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-07-15 13:14:50.738852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.165 [2024-07-15 13:14:50.738863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.165 [2024-07-15 13:14:50.739102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.165 [2024-07-15 13:14:50.739335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.165 [2024-07-15 13:14:50.739344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.165 [2024-07-15 13:14:50.739351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.165 [2024-07-15 13:14:50.742906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.165 [2024-07-15 13:14:50.751910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.165 [2024-07-15 13:14:50.752429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-07-15 13:14:50.752466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.165 [2024-07-15 13:14:50.752478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.165 [2024-07-15 13:14:50.752721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.165 [2024-07-15 13:14:50.752944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.165 [2024-07-15 13:14:50.752953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.165 [2024-07-15 13:14:50.752960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.165 [2024-07-15 13:14:50.756523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.165 [2024-07-15 13:14:50.765753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.165 [2024-07-15 13:14:50.766447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-07-15 13:14:50.766484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.165 [2024-07-15 13:14:50.766495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.165 [2024-07-15 13:14:50.766734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.165 [2024-07-15 13:14:50.766957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.165 [2024-07-15 13:14:50.766966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.165 [2024-07-15 13:14:50.766974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.165 [2024-07-15 13:14:50.770538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.165 [2024-07-15 13:14:50.779755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.165 [2024-07-15 13:14:50.780415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-07-15 13:14:50.780452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.165 [2024-07-15 13:14:50.780463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.165 [2024-07-15 13:14:50.780706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.165 [2024-07-15 13:14:50.780929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.165 [2024-07-15 13:14:50.780937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.165 [2024-07-15 13:14:50.780945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.165 [2024-07-15 13:14:50.784505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.165 [2024-07-15 13:14:50.793718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.165 [2024-07-15 13:14:50.794340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-07-15 13:14:50.794377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.165 [2024-07-15 13:14:50.794389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.165 [2024-07-15 13:14:50.794632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.165 [2024-07-15 13:14:50.794855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.165 [2024-07-15 13:14:50.794863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.165 [2024-07-15 13:14:50.794871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.165 [2024-07-15 13:14:50.798434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.165 [2024-07-15 13:14:50.807656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.165 [2024-07-15 13:14:50.808241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-07-15 13:14:50.808261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.165 [2024-07-15 13:14:50.808269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.165 [2024-07-15 13:14:50.808490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.165 [2024-07-15 13:14:50.808709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.165 [2024-07-15 13:14:50.808717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.165 [2024-07-15 13:14:50.808724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.165 [2024-07-15 13:14:50.812277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.165 [2024-07-15 13:14:50.821482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.165 [2024-07-15 13:14:50.822139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-07-15 13:14:50.822176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.165 [2024-07-15 13:14:50.822186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.165 [2024-07-15 13:14:50.822435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.165 [2024-07-15 13:14:50.822660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.165 [2024-07-15 13:14:50.822668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.165 [2024-07-15 13:14:50.822680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.165 [2024-07-15 13:14:50.826241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.165 [2024-07-15 13:14:50.835457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.165 [2024-07-15 13:14:50.836155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-07-15 13:14:50.836192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.165 [2024-07-15 13:14:50.836204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.165 [2024-07-15 13:14:50.836454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.165 [2024-07-15 13:14:50.836678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.165 [2024-07-15 13:14:50.836686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.165 [2024-07-15 13:14:50.836694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.165 [2024-07-15 13:14:50.840246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.165 [2024-07-15 13:14:50.849250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.165 [2024-07-15 13:14:50.849840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.165 [2024-07-15 13:14:50.849877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.165 [2024-07-15 13:14:50.849889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.165 [2024-07-15 13:14:50.850130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.165 [2024-07-15 13:14:50.850364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.165 [2024-07-15 13:14:50.850375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.165 [2024-07-15 13:14:50.850382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.165 [2024-07-15 13:14:50.853936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.165 [2024-07-15 13:14:50.863152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.166 [2024-07-15 13:14:50.863868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.166 [2024-07-15 13:14:50.863905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.166 [2024-07-15 13:14:50.863916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.166 [2024-07-15 13:14:50.864155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.166 [2024-07-15 13:14:50.864390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.166 [2024-07-15 13:14:50.864400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.166 [2024-07-15 13:14:50.864408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.166 [2024-07-15 13:14:50.867963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.166 [2024-07-15 13:14:50.876969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.166 [2024-07-15 13:14:50.877428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.166 [2024-07-15 13:14:50.877448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.166 [2024-07-15 13:14:50.877456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.166 [2024-07-15 13:14:50.877678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.166 [2024-07-15 13:14:50.877897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.166 [2024-07-15 13:14:50.877905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.166 [2024-07-15 13:14:50.877912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.166 [2024-07-15 13:14:50.881469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.166 [2024-07-15 13:14:50.890886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.166 [2024-07-15 13:14:50.891550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.166 [2024-07-15 13:14:50.891587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.166 [2024-07-15 13:14:50.891598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.166 [2024-07-15 13:14:50.891837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.166 [2024-07-15 13:14:50.892060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.166 [2024-07-15 13:14:50.892068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.166 [2024-07-15 13:14:50.892076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.166 [2024-07-15 13:14:50.895641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.166 [2024-07-15 13:14:50.904860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.166 [2024-07-15 13:14:50.905536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.166 [2024-07-15 13:14:50.905573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.166 [2024-07-15 13:14:50.905584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.166 [2024-07-15 13:14:50.905823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.166 [2024-07-15 13:14:50.906047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.166 [2024-07-15 13:14:50.906055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.166 [2024-07-15 13:14:50.906062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.166 [2024-07-15 13:14:50.909628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.166 [2024-07-15 13:14:50.918851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.166 [2024-07-15 13:14:50.919536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.166 [2024-07-15 13:14:50.919573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.166 [2024-07-15 13:14:50.919586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.166 [2024-07-15 13:14:50.919828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.166 [2024-07-15 13:14:50.920056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.166 [2024-07-15 13:14:50.920065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.166 [2024-07-15 13:14:50.920072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.166 [2024-07-15 13:14:50.923638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.166 [2024-07-15 13:14:50.932863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.166 [2024-07-15 13:14:50.933567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.166 [2024-07-15 13:14:50.933604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.166 [2024-07-15 13:14:50.933615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.166 [2024-07-15 13:14:50.933855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.166 [2024-07-15 13:14:50.934078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.166 [2024-07-15 13:14:50.934087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.166 [2024-07-15 13:14:50.934094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.166 [2024-07-15 13:14:50.937652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.166 [2024-07-15 13:14:50.946867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.166 [2024-07-15 13:14:50.947572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.166 [2024-07-15 13:14:50.947609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.166 [2024-07-15 13:14:50.947620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.166 [2024-07-15 13:14:50.947858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.166 [2024-07-15 13:14:50.948081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.166 [2024-07-15 13:14:50.948089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.166 [2024-07-15 13:14:50.948097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.166 [2024-07-15 13:14:50.951652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.166 [2024-07-15 13:14:50.960910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.166 [2024-07-15 13:14:50.961375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.166 [2024-07-15 13:14:50.961413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.166 [2024-07-15 13:14:50.961424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.166 [2024-07-15 13:14:50.961664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.166 [2024-07-15 13:14:50.961887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.166 [2024-07-15 13:14:50.961895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.166 [2024-07-15 13:14:50.961903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.166 [2024-07-15 13:14:50.965484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.166 [2024-07-15 13:14:50.974916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.166 [2024-07-15 13:14:50.975588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.166 [2024-07-15 13:14:50.975625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.166 [2024-07-15 13:14:50.975636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.166 [2024-07-15 13:14:50.975876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.166 [2024-07-15 13:14:50.976099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.166 [2024-07-15 13:14:50.976107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.166 [2024-07-15 13:14:50.976115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.166 [2024-07-15 13:14:50.979675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.445 [2024-07-15 13:14:50.988887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.445 [2024-07-15 13:14:50.989559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.445 [2024-07-15 13:14:50.989597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.446 [2024-07-15 13:14:50.989608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.446 [2024-07-15 13:14:50.989847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.446 [2024-07-15 13:14:50.990070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.446 [2024-07-15 13:14:50.990079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.446 [2024-07-15 13:14:50.990086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.446 [2024-07-15 13:14:50.993646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.446 [2024-07-15 13:14:51.002852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.446 [2024-07-15 13:14:51.003479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-15 13:14:51.003498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.446 [2024-07-15 13:14:51.003506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.446 [2024-07-15 13:14:51.003727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.446 [2024-07-15 13:14:51.003946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.446 [2024-07-15 13:14:51.003955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.446 [2024-07-15 13:14:51.003962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.446 [2024-07-15 13:14:51.007512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.446 [2024-07-15 13:14:51.016733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.446 [2024-07-15 13:14:51.017423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-15 13:14:51.017460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.446 [2024-07-15 13:14:51.017477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.446 [2024-07-15 13:14:51.017717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.446 [2024-07-15 13:14:51.017940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.446 [2024-07-15 13:14:51.017949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.446 [2024-07-15 13:14:51.017957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.446 [2024-07-15 13:14:51.021518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.446 [2024-07-15 13:14:51.030729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.446 [2024-07-15 13:14:51.031423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-15 13:14:51.031461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.446 [2024-07-15 13:14:51.031472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.446 [2024-07-15 13:14:51.031712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.446 [2024-07-15 13:14:51.031935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.446 [2024-07-15 13:14:51.031944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.446 [2024-07-15 13:14:51.031952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.446 [2024-07-15 13:14:51.035515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.446 [2024-07-15 13:14:51.044727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.446 [2024-07-15 13:14:51.045309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-15 13:14:51.045328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.446 [2024-07-15 13:14:51.045336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.446 [2024-07-15 13:14:51.045556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.446 [2024-07-15 13:14:51.045776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.446 [2024-07-15 13:14:51.045785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.446 [2024-07-15 13:14:51.045792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.446 [2024-07-15 13:14:51.049339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.446 [2024-07-15 13:14:51.058546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.446 [2024-07-15 13:14:51.059199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-15 13:14:51.059243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.446 [2024-07-15 13:14:51.059255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.446 [2024-07-15 13:14:51.059496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.446 [2024-07-15 13:14:51.059725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.446 [2024-07-15 13:14:51.059734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.446 [2024-07-15 13:14:51.059741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.446 [2024-07-15 13:14:51.063294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.446 [2024-07-15 13:14:51.072514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.446 [2024-07-15 13:14:51.073211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-15 13:14:51.073256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.446 [2024-07-15 13:14:51.073269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.446 [2024-07-15 13:14:51.073509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.446 [2024-07-15 13:14:51.073732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.446 [2024-07-15 13:14:51.073741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.446 [2024-07-15 13:14:51.073749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.446 [2024-07-15 13:14:51.077305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.446 [2024-07-15 13:14:51.086323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.446 [2024-07-15 13:14:51.086888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-15 13:14:51.086906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.446 [2024-07-15 13:14:51.086914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.446 [2024-07-15 13:14:51.087134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.446 [2024-07-15 13:14:51.087447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.446 [2024-07-15 13:14:51.087458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.446 [2024-07-15 13:14:51.087465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.446 [2024-07-15 13:14:51.091018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.446 [2024-07-15 13:14:51.100237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.446 [2024-07-15 13:14:51.100914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-15 13:14:51.100952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.446 [2024-07-15 13:14:51.100962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.446 [2024-07-15 13:14:51.101202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.446 [2024-07-15 13:14:51.101436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.446 [2024-07-15 13:14:51.101446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.446 [2024-07-15 13:14:51.101453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.446 [2024-07-15 13:14:51.105005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.446 [2024-07-15 13:14:51.114222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.446 [2024-07-15 13:14:51.114929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-15 13:14:51.114966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.446 [2024-07-15 13:14:51.114977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.446 [2024-07-15 13:14:51.115216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.446 [2024-07-15 13:14:51.115448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.446 [2024-07-15 13:14:51.115458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.446 [2024-07-15 13:14:51.115465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.446 [2024-07-15 13:14:51.119018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.446 [2024-07-15 13:14:51.128023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.446 [2024-07-15 13:14:51.128701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.446 [2024-07-15 13:14:51.128738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.446 [2024-07-15 13:14:51.128749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.446 [2024-07-15 13:14:51.128988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.446 [2024-07-15 13:14:51.129211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.446 [2024-07-15 13:14:51.129220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.446 [2024-07-15 13:14:51.129227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.447 [2024-07-15 13:14:51.132789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.447 [2024-07-15 13:14:51.142000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.447 [2024-07-15 13:14:51.142562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-15 13:14:51.142581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.447 [2024-07-15 13:14:51.142589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.447 [2024-07-15 13:14:51.142809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.447 [2024-07-15 13:14:51.143029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.447 [2024-07-15 13:14:51.143036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.447 [2024-07-15 13:14:51.143043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.447 [2024-07-15 13:14:51.146601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.447 [2024-07-15 13:14:51.155810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.447 [2024-07-15 13:14:51.156541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-15 13:14:51.156578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.447 [2024-07-15 13:14:51.156593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.447 [2024-07-15 13:14:51.156832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.447 [2024-07-15 13:14:51.157056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.447 [2024-07-15 13:14:51.157064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.447 [2024-07-15 13:14:51.157072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.447 [2024-07-15 13:14:51.160633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.447 [2024-07-15 13:14:51.169648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.447 [2024-07-15 13:14:51.170329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-15 13:14:51.170367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.447 [2024-07-15 13:14:51.170379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.447 [2024-07-15 13:14:51.170619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.447 [2024-07-15 13:14:51.170842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.447 [2024-07-15 13:14:51.170851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.447 [2024-07-15 13:14:51.170859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.447 [2024-07-15 13:14:51.174419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.447 [2024-07-15 13:14:51.183631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.447 [2024-07-15 13:14:51.184295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-15 13:14:51.184332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.447 [2024-07-15 13:14:51.184344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.447 [2024-07-15 13:14:51.184585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.447 [2024-07-15 13:14:51.184808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.447 [2024-07-15 13:14:51.184817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.447 [2024-07-15 13:14:51.184825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.447 [2024-07-15 13:14:51.188386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.447 [2024-07-15 13:14:51.197615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.447 [2024-07-15 13:14:51.198208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-15 13:14:51.198251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.447 [2024-07-15 13:14:51.198264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.447 [2024-07-15 13:14:51.198506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.447 [2024-07-15 13:14:51.198729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.447 [2024-07-15 13:14:51.198742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.447 [2024-07-15 13:14:51.198751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.447 [2024-07-15 13:14:51.202307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.447 [2024-07-15 13:14:51.211517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.447 [2024-07-15 13:14:51.212223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-15 13:14:51.212267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.447 [2024-07-15 13:14:51.212278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.447 [2024-07-15 13:14:51.212517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.447 [2024-07-15 13:14:51.212740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.447 [2024-07-15 13:14:51.212749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.447 [2024-07-15 13:14:51.212757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.447 [2024-07-15 13:14:51.216314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.447 [2024-07-15 13:14:51.225316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.447 [2024-07-15 13:14:51.225929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-15 13:14:51.225966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.447 [2024-07-15 13:14:51.225977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.447 [2024-07-15 13:14:51.226216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.447 [2024-07-15 13:14:51.226448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.447 [2024-07-15 13:14:51.226457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.447 [2024-07-15 13:14:51.226465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.447 [2024-07-15 13:14:51.230016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.447 [2024-07-15 13:14:51.239231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.447 [2024-07-15 13:14:51.239887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-15 13:14:51.239924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.447 [2024-07-15 13:14:51.239935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.447 [2024-07-15 13:14:51.240175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.447 [2024-07-15 13:14:51.240410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.447 [2024-07-15 13:14:51.240419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.447 [2024-07-15 13:14:51.240427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.447 [2024-07-15 13:14:51.243979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.447 [2024-07-15 13:14:51.253233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.447 [2024-07-15 13:14:51.253945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.447 [2024-07-15 13:14:51.253981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.447 [2024-07-15 13:14:51.253992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.447 [2024-07-15 13:14:51.254241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.447 [2024-07-15 13:14:51.254465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.447 [2024-07-15 13:14:51.254474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.447 [2024-07-15 13:14:51.254481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.447 [2024-07-15 13:14:51.258036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.738 [2024-07-15 13:14:51.267049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.738 [2024-07-15 13:14:51.267759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.738 [2024-07-15 13:14:51.267796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.738 [2024-07-15 13:14:51.267807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.738 [2024-07-15 13:14:51.268047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.738 [2024-07-15 13:14:51.268280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.738 [2024-07-15 13:14:51.268289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.738 [2024-07-15 13:14:51.268296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.738 [2024-07-15 13:14:51.271849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.738 [2024-07-15 13:14:51.280853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.738 [2024-07-15 13:14:51.281549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.738 [2024-07-15 13:14:51.281587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.738 [2024-07-15 13:14:51.281599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.738 [2024-07-15 13:14:51.281839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.738 [2024-07-15 13:14:51.282063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.738 [2024-07-15 13:14:51.282071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.738 [2024-07-15 13:14:51.282079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.738 [2024-07-15 13:14:51.285641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.738 [2024-07-15 13:14:51.294852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.738 [2024-07-15 13:14:51.295327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.738 [2024-07-15 13:14:51.295365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.738 [2024-07-15 13:14:51.295377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.738 [2024-07-15 13:14:51.295624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.738 [2024-07-15 13:14:51.295847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.738 [2024-07-15 13:14:51.295856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.738 [2024-07-15 13:14:51.295863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.738 [2024-07-15 13:14:51.299426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.738 [2024-07-15 13:14:51.308845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.738 [2024-07-15 13:14:51.309508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.738 [2024-07-15 13:14:51.309545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.738 [2024-07-15 13:14:51.309556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.738 [2024-07-15 13:14:51.309796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.738 [2024-07-15 13:14:51.310019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.738 [2024-07-15 13:14:51.310027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.738 [2024-07-15 13:14:51.310035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.738 [2024-07-15 13:14:51.313596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.738 [2024-07-15 13:14:51.322810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.738 [2024-07-15 13:14:51.323498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.738 [2024-07-15 13:14:51.323534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.738 [2024-07-15 13:14:51.323545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.738 [2024-07-15 13:14:51.323784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.738 [2024-07-15 13:14:51.324008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.738 [2024-07-15 13:14:51.324016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.738 [2024-07-15 13:14:51.324024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.738 [2024-07-15 13:14:51.327584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.738 [2024-07-15 13:14:51.336808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.738 [2024-07-15 13:14:51.337386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.738 [2024-07-15 13:14:51.337404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.738 [2024-07-15 13:14:51.337412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.738 [2024-07-15 13:14:51.337632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.738 [2024-07-15 13:14:51.337852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.738 [2024-07-15 13:14:51.337859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.738 [2024-07-15 13:14:51.337871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.738 [2024-07-15 13:14:51.341423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.738 [2024-07-15 13:14:51.350669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.738 [2024-07-15 13:14:51.351224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.738 [2024-07-15 13:14:51.351244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.738 [2024-07-15 13:14:51.351252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.738 [2024-07-15 13:14:51.351471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.738 [2024-07-15 13:14:51.351690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.738 [2024-07-15 13:14:51.351698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.738 [2024-07-15 13:14:51.351705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.738 [2024-07-15 13:14:51.355254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.738 [2024-07-15 13:14:51.364469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.738 [2024-07-15 13:14:51.365158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.738 [2024-07-15 13:14:51.365195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.738 [2024-07-15 13:14:51.365207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.738 [2024-07-15 13:14:51.365459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.738 [2024-07-15 13:14:51.365683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.738 [2024-07-15 13:14:51.365691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.738 [2024-07-15 13:14:51.365699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.738 [2024-07-15 13:14:51.369253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.738 [2024-07-15 13:14:51.378463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.738 [2024-07-15 13:14:51.379098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.738 [2024-07-15 13:14:51.379135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.738 [2024-07-15 13:14:51.379145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.739 [2024-07-15 13:14:51.379393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.739 [2024-07-15 13:14:51.379618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.739 [2024-07-15 13:14:51.379626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.739 [2024-07-15 13:14:51.379634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.739 [2024-07-15 13:14:51.383185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.739 [2024-07-15 13:14:51.392400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.739 [2024-07-15 13:14:51.393100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.739 [2024-07-15 13:14:51.393141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.739 [2024-07-15 13:14:51.393152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.739 [2024-07-15 13:14:51.393400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.739 [2024-07-15 13:14:51.393624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.739 [2024-07-15 13:14:51.393632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.739 [2024-07-15 13:14:51.393640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.739 [2024-07-15 13:14:51.397191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.739 [2024-07-15 13:14:51.406199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.739 [2024-07-15 13:14:51.406794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.739 [2024-07-15 13:14:51.406831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.739 [2024-07-15 13:14:51.406841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.739 [2024-07-15 13:14:51.407081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.739 [2024-07-15 13:14:51.407312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.739 [2024-07-15 13:14:51.407321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.739 [2024-07-15 13:14:51.407329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.739 [2024-07-15 13:14:51.410882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.739 [2024-07-15 13:14:51.420096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.739 [2024-07-15 13:14:51.420707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.739 [2024-07-15 13:14:51.420745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.739 [2024-07-15 13:14:51.420755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.739 [2024-07-15 13:14:51.420995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.739 [2024-07-15 13:14:51.421218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.739 [2024-07-15 13:14:51.421227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.739 [2024-07-15 13:14:51.421243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.739 [2024-07-15 13:14:51.424796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.739 [2024-07-15 13:14:51.434006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.739 [2024-07-15 13:14:51.434697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.739 [2024-07-15 13:14:51.434735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.739 [2024-07-15 13:14:51.434746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.739 [2024-07-15 13:14:51.434985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.739 [2024-07-15 13:14:51.435213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.739 [2024-07-15 13:14:51.435223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.739 [2024-07-15 13:14:51.435239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.739 [2024-07-15 13:14:51.438793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.739 [2024-07-15 13:14:51.448008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.739 [2024-07-15 13:14:51.448723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.739 [2024-07-15 13:14:51.448761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.739 [2024-07-15 13:14:51.448772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.739 [2024-07-15 13:14:51.449011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.739 [2024-07-15 13:14:51.449242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.739 [2024-07-15 13:14:51.449251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.739 [2024-07-15 13:14:51.449259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.739 [2024-07-15 13:14:51.452812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.739 [2024-07-15 13:14:51.461898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.739 [2024-07-15 13:14:51.462566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.739 [2024-07-15 13:14:51.462604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.739 [2024-07-15 13:14:51.462614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.739 [2024-07-15 13:14:51.462854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.739 [2024-07-15 13:14:51.463078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.739 [2024-07-15 13:14:51.463086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.739 [2024-07-15 13:14:51.463094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.739 [2024-07-15 13:14:51.466665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.739 [2024-07-15 13:14:51.475874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.739 [2024-07-15 13:14:51.476553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.739 [2024-07-15 13:14:51.476590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.739 [2024-07-15 13:14:51.476601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.739 [2024-07-15 13:14:51.476840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.739 [2024-07-15 13:14:51.477063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.739 [2024-07-15 13:14:51.477071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.739 [2024-07-15 13:14:51.477079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.739 [2024-07-15 13:14:51.480646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.739 [2024-07-15 13:14:51.489863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.739 [2024-07-15 13:14:51.490550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.739 [2024-07-15 13:14:51.490587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.739 [2024-07-15 13:14:51.490597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.739 [2024-07-15 13:14:51.490837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.739 [2024-07-15 13:14:51.491060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.739 [2024-07-15 13:14:51.491069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.739 [2024-07-15 13:14:51.491076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.739 [2024-07-15 13:14:51.494639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.739 [2024-07-15 13:14:51.503852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.739 [2024-07-15 13:14:51.504548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.739 [2024-07-15 13:14:51.504585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.739 [2024-07-15 13:14:51.504596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.739 [2024-07-15 13:14:51.504835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.739 [2024-07-15 13:14:51.505058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.739 [2024-07-15 13:14:51.505066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.739 [2024-07-15 13:14:51.505074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.739 [2024-07-15 13:14:51.508635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.739 [2024-07-15 13:14:51.517847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.739 [2024-07-15 13:14:51.518285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.739 [2024-07-15 13:14:51.518303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.739 [2024-07-15 13:14:51.518312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.739 [2024-07-15 13:14:51.518532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.739 [2024-07-15 13:14:51.518751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.739 [2024-07-15 13:14:51.518759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.739 [2024-07-15 13:14:51.518766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.739 [2024-07-15 13:14:51.522317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.739 [2024-07-15 13:14:51.531734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.739 [2024-07-15 13:14:51.532336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.739 [2024-07-15 13:14:51.532373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.740 [2024-07-15 13:14:51.532390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.740 [2024-07-15 13:14:51.532632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.740 [2024-07-15 13:14:51.532855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.740 [2024-07-15 13:14:51.532864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.740 [2024-07-15 13:14:51.532871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.740 [2024-07-15 13:14:51.536434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.740 [2024-07-15 13:14:51.545650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.740 [2024-07-15 13:14:51.546310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.740 [2024-07-15 13:14:51.546347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.740 [2024-07-15 13:14:51.546358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.740 [2024-07-15 13:14:51.546597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.740 [2024-07-15 13:14:51.546821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.740 [2024-07-15 13:14:51.546830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.740 [2024-07-15 13:14:51.546837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.740 [2024-07-15 13:14:51.550401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.740 [2024-07-15 13:14:51.559619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.740 [2024-07-15 13:14:51.560303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.740 [2024-07-15 13:14:51.560340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:29.740 [2024-07-15 13:14:51.560351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:29.740 [2024-07-15 13:14:51.560591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:29.740 [2024-07-15 13:14:51.560814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.740 [2024-07-15 13:14:51.560823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.740 [2024-07-15 13:14:51.560830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.001 [2024-07-15 13:14:51.564393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.001 [2024-07-15 13:14:51.573617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.001 [2024-07-15 13:14:51.574321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.001 [2024-07-15 13:14:51.574358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.001 [2024-07-15 13:14:51.574370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.001 [2024-07-15 13:14:51.574614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.001 [2024-07-15 13:14:51.574837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.001 [2024-07-15 13:14:51.574849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.001 [2024-07-15 13:14:51.574857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.001 [2024-07-15 13:14:51.578420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.001 [2024-07-15 13:14:51.587426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.001 [2024-07-15 13:14:51.588013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.001 [2024-07-15 13:14:51.588050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.001 [2024-07-15 13:14:51.588061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.001 [2024-07-15 13:14:51.588309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.001 [2024-07-15 13:14:51.588534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.001 [2024-07-15 13:14:51.588542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.001 [2024-07-15 13:14:51.588550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.001 [2024-07-15 13:14:51.592102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.001 [2024-07-15 13:14:51.601319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.001 [2024-07-15 13:14:51.601999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.001 [2024-07-15 13:14:51.602036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.001 [2024-07-15 13:14:51.602047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.001 [2024-07-15 13:14:51.602296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.001 [2024-07-15 13:14:51.602520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.001 [2024-07-15 13:14:51.602529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.001 [2024-07-15 13:14:51.602536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.001 [2024-07-15 13:14:51.606087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.001 [2024-07-15 13:14:51.615302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.001 [2024-07-15 13:14:51.616006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.001 [2024-07-15 13:14:51.616043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.001 [2024-07-15 13:14:51.616054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.001 [2024-07-15 13:14:51.616301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.001 [2024-07-15 13:14:51.616525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.001 [2024-07-15 13:14:51.616534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.001 [2024-07-15 13:14:51.616541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.001 [2024-07-15 13:14:51.620093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.001 [2024-07-15 13:14:51.629105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.001 [2024-07-15 13:14:51.629816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.001 [2024-07-15 13:14:51.629853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.001 [2024-07-15 13:14:51.629863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.001 [2024-07-15 13:14:51.630103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.001 [2024-07-15 13:14:51.630334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.001 [2024-07-15 13:14:51.630343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.001 [2024-07-15 13:14:51.630351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.001 [2024-07-15 13:14:51.633902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.001 [2024-07-15 13:14:51.642915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.001 [2024-07-15 13:14:51.643619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.001 [2024-07-15 13:14:51.643656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.001 [2024-07-15 13:14:51.643667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.001 [2024-07-15 13:14:51.643906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.644130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.644138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.644146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.002 [2024-07-15 13:14:51.647707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.002 [2024-07-15 13:14:51.656923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.002 [2024-07-15 13:14:51.657597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.002 [2024-07-15 13:14:51.657635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.002 [2024-07-15 13:14:51.657645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.002 [2024-07-15 13:14:51.657885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.658108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.658117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.658124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.002 [2024-07-15 13:14:51.661687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.002 [2024-07-15 13:14:51.670914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.002 [2024-07-15 13:14:51.671593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.002 [2024-07-15 13:14:51.671631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.002 [2024-07-15 13:14:51.671649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.002 [2024-07-15 13:14:51.671889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.672113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.672121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.672129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.002 [2024-07-15 13:14:51.675690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.002 [2024-07-15 13:14:51.684905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.002 [2024-07-15 13:14:51.685452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.002 [2024-07-15 13:14:51.685471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.002 [2024-07-15 13:14:51.685479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.002 [2024-07-15 13:14:51.685700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.685919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.685928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.685935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.002 [2024-07-15 13:14:51.689486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.002 [2024-07-15 13:14:51.698904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.002 [2024-07-15 13:14:51.699461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.002 [2024-07-15 13:14:51.699477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.002 [2024-07-15 13:14:51.699485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.002 [2024-07-15 13:14:51.699704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.699923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.699931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.699937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.002 [2024-07-15 13:14:51.703487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.002 [2024-07-15 13:14:51.712694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.002 [2024-07-15 13:14:51.713345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.002 [2024-07-15 13:14:51.713383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.002 [2024-07-15 13:14:51.713395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.002 [2024-07-15 13:14:51.713636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.713859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.713868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.713879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.002 [2024-07-15 13:14:51.717442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.002 [2024-07-15 13:14:51.726657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.002 [2024-07-15 13:14:51.727293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.002 [2024-07-15 13:14:51.727312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.002 [2024-07-15 13:14:51.727320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.002 [2024-07-15 13:14:51.727541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.727762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.727770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.727777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.002 [2024-07-15 13:14:51.731328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.002 [2024-07-15 13:14:51.740538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.002 [2024-07-15 13:14:51.741102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.002 [2024-07-15 13:14:51.741140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.002 [2024-07-15 13:14:51.741152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.002 [2024-07-15 13:14:51.741400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.741624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.741633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.741640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.002 [2024-07-15 13:14:51.745193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.002 [2024-07-15 13:14:51.754410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.002 [2024-07-15 13:14:51.755027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.002 [2024-07-15 13:14:51.755045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.002 [2024-07-15 13:14:51.755053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.002 [2024-07-15 13:14:51.755280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.755501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.755509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.755516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.002 [2024-07-15 13:14:51.759064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.002 [2024-07-15 13:14:51.768298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.002 [2024-07-15 13:14:51.768925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.002 [2024-07-15 13:14:51.768962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.002 [2024-07-15 13:14:51.768974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.002 [2024-07-15 13:14:51.769259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.769487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.769495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.769502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.002 [2024-07-15 13:14:51.773055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.002 [2024-07-15 13:14:51.782280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.002 [2024-07-15 13:14:51.782908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.002 [2024-07-15 13:14:51.782926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.002 [2024-07-15 13:14:51.782934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.002 [2024-07-15 13:14:51.783154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.783382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.783390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.783397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.002 [2024-07-15 13:14:51.786945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.002 [2024-07-15 13:14:51.796157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.002 [2024-07-15 13:14:51.796836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.002 [2024-07-15 13:14:51.796873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.002 [2024-07-15 13:14:51.796884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.002 [2024-07-15 13:14:51.797123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.797354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.797363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.797370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.002 [2024-07-15 13:14:51.800924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.002 [2024-07-15 13:14:51.810140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.002 [2024-07-15 13:14:51.810723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.002 [2024-07-15 13:14:51.810743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.002 [2024-07-15 13:14:51.810750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.002 [2024-07-15 13:14:51.810975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.811195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.811203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.811210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.002 [2024-07-15 13:14:51.814764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.002 [2024-07-15 13:14:51.823973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.002 [2024-07-15 13:14:51.824632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.002 [2024-07-15 13:14:51.824669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.002 [2024-07-15 13:14:51.824680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.002 [2024-07-15 13:14:51.824919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.002 [2024-07-15 13:14:51.825142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.002 [2024-07-15 13:14:51.825151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.002 [2024-07-15 13:14:51.825158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.264 [2024-07-15 13:14:51.828722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.264 [2024-07-15 13:14:51.837939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.264 [2024-07-15 13:14:51.838626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.264 [2024-07-15 13:14:51.838663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.264 [2024-07-15 13:14:51.838674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.264 [2024-07-15 13:14:51.838914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.264 [2024-07-15 13:14:51.839137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.264 [2024-07-15 13:14:51.839146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.264 [2024-07-15 13:14:51.839153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.264 [2024-07-15 13:14:51.842712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.264 [2024-07-15 13:14:51.851929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.264 [2024-07-15 13:14:51.852579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.264 [2024-07-15 13:14:51.852599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.264 [2024-07-15 13:14:51.852607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.264 [2024-07-15 13:14:51.852827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.264 [2024-07-15 13:14:51.853047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.264 [2024-07-15 13:14:51.853054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.264 [2024-07-15 13:14:51.853066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.264 [2024-07-15 13:14:51.856618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.264 [2024-07-15 13:14:51.865834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.264 [2024-07-15 13:14:51.866508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.264 [2024-07-15 13:14:51.866545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.264 [2024-07-15 13:14:51.866556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.264 [2024-07-15 13:14:51.866795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.264 [2024-07-15 13:14:51.867018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.264 [2024-07-15 13:14:51.867027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.264 [2024-07-15 13:14:51.867035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.264 [2024-07-15 13:14:51.870605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.264 [2024-07-15 13:14:51.879823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.264 [2024-07-15 13:14:51.880372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.264 [2024-07-15 13:14:51.880392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.264 [2024-07-15 13:14:51.880400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.264 [2024-07-15 13:14:51.880620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.264 [2024-07-15 13:14:51.880839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.264 [2024-07-15 13:14:51.880847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.264 [2024-07-15 13:14:51.880854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.264 [2024-07-15 13:14:51.884408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.264 [2024-07-15 13:14:51.893621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.264 [2024-07-15 13:14:51.894222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.264 [2024-07-15 13:14:51.894242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.264 [2024-07-15 13:14:51.894250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.264 [2024-07-15 13:14:51.894469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.264 [2024-07-15 13:14:51.894688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.264 [2024-07-15 13:14:51.894697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.264 [2024-07-15 13:14:51.894705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.264 [2024-07-15 13:14:51.898257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.264 [2024-07-15 13:14:51.907470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.264 [2024-07-15 13:14:51.908159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.264 [2024-07-15 13:14:51.908200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.264 [2024-07-15 13:14:51.908211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.264 [2024-07-15 13:14:51.908458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.264 [2024-07-15 13:14:51.908682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.264 [2024-07-15 13:14:51.908691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.264 [2024-07-15 13:14:51.908698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.264 [2024-07-15 13:14:51.912255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.264 [2024-07-15 13:14:51.921472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.264 [2024-07-15 13:14:51.922176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.264 [2024-07-15 13:14:51.922214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.264 [2024-07-15 13:14:51.922226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.264 [2024-07-15 13:14:51.922474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.264 [2024-07-15 13:14:51.922697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.264 [2024-07-15 13:14:51.922706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.264 [2024-07-15 13:14:51.922713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.264 [2024-07-15 13:14:51.926266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.264 [2024-07-15 13:14:51.935273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.264 [2024-07-15 13:14:51.935855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.264 [2024-07-15 13:14:51.935873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.264 [2024-07-15 13:14:51.935881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.264 [2024-07-15 13:14:51.936101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.264 [2024-07-15 13:14:51.936327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.264 [2024-07-15 13:14:51.936336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.264 [2024-07-15 13:14:51.936343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.264 [2024-07-15 13:14:51.939893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.264 [2024-07-15 13:14:51.949110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.264 [2024-07-15 13:14:51.949771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.264 [2024-07-15 13:14:51.949808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.264 [2024-07-15 13:14:51.949819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.264 [2024-07-15 13:14:51.950058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.264 [2024-07-15 13:14:51.950294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.264 [2024-07-15 13:14:51.950304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.264 [2024-07-15 13:14:51.950312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.264 [2024-07-15 13:14:51.953863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.264 [2024-07-15 13:14:51.963075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.264 [2024-07-15 13:14:51.963701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.264 [2024-07-15 13:14:51.963719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.264 [2024-07-15 13:14:51.963727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.264 [2024-07-15 13:14:51.963947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.264 [2024-07-15 13:14:51.964167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.264 [2024-07-15 13:14:51.964174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.264 [2024-07-15 13:14:51.964181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.264 [2024-07-15 13:14:51.967746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.264 [2024-07-15 13:14:51.976960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.265 [2024-07-15 13:14:51.977641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.265 [2024-07-15 13:14:51.977678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.265 [2024-07-15 13:14:51.977689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.265 [2024-07-15 13:14:51.977929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.265 [2024-07-15 13:14:51.978151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.265 [2024-07-15 13:14:51.978160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.265 [2024-07-15 13:14:51.978167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.265 [2024-07-15 13:14:51.981728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.265 [2024-07-15 13:14:51.990966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.265 [2024-07-15 13:14:51.991387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.265 [2024-07-15 13:14:51.991405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.265 [2024-07-15 13:14:51.991413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.265 [2024-07-15 13:14:51.991633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.265 [2024-07-15 13:14:51.991852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.265 [2024-07-15 13:14:51.991861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.265 [2024-07-15 13:14:51.991868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.265 [2024-07-15 13:14:51.995426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.265 [2024-07-15 13:14:52.004846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.265 [2024-07-15 13:14:52.005379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.265 [2024-07-15 13:14:52.005416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.265 [2024-07-15 13:14:52.005428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.265 [2024-07-15 13:14:52.005671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.265 [2024-07-15 13:14:52.005894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.265 [2024-07-15 13:14:52.005903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.265 [2024-07-15 13:14:52.005911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.265 [2024-07-15 13:14:52.009473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.265 [2024-07-15 13:14:52.018689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.265 [2024-07-15 13:14:52.019313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.265 [2024-07-15 13:14:52.019350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.265 [2024-07-15 13:14:52.019362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.265 [2024-07-15 13:14:52.019605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.265 [2024-07-15 13:14:52.019828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.265 [2024-07-15 13:14:52.019837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.265 [2024-07-15 13:14:52.019845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.265 [2024-07-15 13:14:52.023407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.265 [2024-07-15 13:14:52.032625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.265 [2024-07-15 13:14:52.033241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.265 [2024-07-15 13:14:52.033260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.265 [2024-07-15 13:14:52.033268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.265 [2024-07-15 13:14:52.033488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.265 [2024-07-15 13:14:52.033708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.265 [2024-07-15 13:14:52.033715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.265 [2024-07-15 13:14:52.033722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.265 [2024-07-15 13:14:52.037273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.265 [2024-07-15 13:14:52.046485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.265 [2024-07-15 13:14:52.046933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.265 [2024-07-15 13:14:52.046952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.265 [2024-07-15 13:14:52.046964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.265 [2024-07-15 13:14:52.047185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.265 [2024-07-15 13:14:52.047452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.265 [2024-07-15 13:14:52.047462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.265 [2024-07-15 13:14:52.047469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.265 [2024-07-15 13:14:52.051022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.265 [2024-07-15 13:14:52.060447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.265 [2024-07-15 13:14:52.061140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.265 [2024-07-15 13:14:52.061177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.265 [2024-07-15 13:14:52.061189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.265 [2024-07-15 13:14:52.061440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.265 [2024-07-15 13:14:52.061664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.265 [2024-07-15 13:14:52.061673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.265 [2024-07-15 13:14:52.061680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.265 [2024-07-15 13:14:52.065237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.265 [2024-07-15 13:14:52.074262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.265 [2024-07-15 13:14:52.074842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.265 [2024-07-15 13:14:52.074859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.265 [2024-07-15 13:14:52.074868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.265 [2024-07-15 13:14:52.075087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.265 [2024-07-15 13:14:52.075314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.265 [2024-07-15 13:14:52.075324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.265 [2024-07-15 13:14:52.075330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.265 [2024-07-15 13:14:52.078879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.265 [2024-07-15 13:14:52.088088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.526 [2024-07-15 13:14:52.088663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.526 [2024-07-15 13:14:52.088700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.526 [2024-07-15 13:14:52.088711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.088951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.089174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.089188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.089195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.092760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.527 [2024-07-15 13:14:52.101975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.102672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.102709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.102720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.102960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.103183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.103192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.103200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.106761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.527 [2024-07-15 13:14:52.115975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.116649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.116686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.116696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.116935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.117158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.117167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.117174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.120738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.527 [2024-07-15 13:14:52.129954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.130628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.130665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.130677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.130917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.131140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.131149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.131157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.134716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.527 [2024-07-15 13:14:52.143938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.144433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.144470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.144482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.144725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.144948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.144958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.144965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.148527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.527 [2024-07-15 13:14:52.157738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.158349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.158368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.158376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.158597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.158816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.158825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.158832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.162383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.527 [2024-07-15 13:14:52.171604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.172209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.172225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.172238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.172458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.172678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.172685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.172692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.176244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.527 [2024-07-15 13:14:52.185449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.186009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.186046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.186058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.186313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.186538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.186547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.186555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.190107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.527 [2024-07-15 13:14:52.199328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.200006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.200043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.200053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.200300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.200524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.200533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.200540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.204092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.527 [2024-07-15 13:14:52.213308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.213947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.213984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.213996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.214243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.214467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.214476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.214484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.218036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.527 [2024-07-15 13:14:52.227253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.227924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.227961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.227971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.228210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.228440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.228450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.228461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.232019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.527 [2024-07-15 13:14:52.241236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.241921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.241958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.241968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.242208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.242438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.242447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.242455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.246008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.527 [2024-07-15 13:14:52.255221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.255801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.255819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.255827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.256047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.256273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.256281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.256288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.259835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.527 [2024-07-15 13:14:52.269059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.269762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.269799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.269810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.270049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.270280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.270290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.270297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.273851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.527 [2024-07-15 13:14:52.282861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.283477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.283496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.283504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.283724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.283944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.283951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.283958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.287509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.527 [2024-07-15 13:14:52.296724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.297295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.297312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.297319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.297539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.297758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.297766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.297773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.301324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.527 [2024-07-15 13:14:52.310534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.311112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.527 [2024-07-15 13:14:52.311127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.527 [2024-07-15 13:14:52.311134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.527 [2024-07-15 13:14:52.311358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.527 [2024-07-15 13:14:52.311578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.527 [2024-07-15 13:14:52.311586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.527 [2024-07-15 13:14:52.311593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.527 [2024-07-15 13:14:52.315138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.527 [2024-07-15 13:14:52.324351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.527 [2024-07-15 13:14:52.325017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.528 [2024-07-15 13:14:52.325054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.528 [2024-07-15 13:14:52.325064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.528 [2024-07-15 13:14:52.325315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.528 [2024-07-15 13:14:52.325539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.528 [2024-07-15 13:14:52.325549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.528 [2024-07-15 13:14:52.325556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.528 [2024-07-15 13:14:52.329110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.528 [2024-07-15 13:14:52.338331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.528 [2024-07-15 13:14:52.338945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.528 [2024-07-15 13:14:52.338963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.528 [2024-07-15 13:14:52.338971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.528 [2024-07-15 13:14:52.339190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.528 [2024-07-15 13:14:52.339416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.528 [2024-07-15 13:14:52.339425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.528 [2024-07-15 13:14:52.339432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.528 [2024-07-15 13:14:52.342979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.789 [2024-07-15 13:14:52.352192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.789 [2024-07-15 13:14:52.352683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.789 [2024-07-15 13:14:52.352699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.789 [2024-07-15 13:14:52.352707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.789 [2024-07-15 13:14:52.352926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.789 [2024-07-15 13:14:52.353145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.789 [2024-07-15 13:14:52.353152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.789 [2024-07-15 13:14:52.353159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.789 [2024-07-15 13:14:52.356710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.789 [2024-07-15 13:14:52.366130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.789 [2024-07-15 13:14:52.366735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.789 [2024-07-15 13:14:52.366750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.789 [2024-07-15 13:14:52.366758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.789 [2024-07-15 13:14:52.366977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.789 [2024-07-15 13:14:52.367196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.789 [2024-07-15 13:14:52.367203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.789 [2024-07-15 13:14:52.367218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.789 [2024-07-15 13:14:52.370783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.789 [2024-07-15 13:14:52.379992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.789 [2024-07-15 13:14:52.380550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.789 [2024-07-15 13:14:52.380566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.789 [2024-07-15 13:14:52.380573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.789 [2024-07-15 13:14:52.380793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.789 [2024-07-15 13:14:52.381013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.789 [2024-07-15 13:14:52.381021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.789 [2024-07-15 13:14:52.381028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.789 [2024-07-15 13:14:52.384584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.789 [2024-07-15 13:14:52.393797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.789 [2024-07-15 13:14:52.394530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.789 [2024-07-15 13:14:52.394568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.789 [2024-07-15 13:14:52.394579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.789 [2024-07-15 13:14:52.394818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.789 [2024-07-15 13:14:52.395042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.789 [2024-07-15 13:14:52.395050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.789 [2024-07-15 13:14:52.395058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.789 [2024-07-15 13:14:52.398618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.789 [2024-07-15 13:14:52.407630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.789 [2024-07-15 13:14:52.408325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.789 [2024-07-15 13:14:52.408363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.789 [2024-07-15 13:14:52.408375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.789 [2024-07-15 13:14:52.408618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.789 [2024-07-15 13:14:52.408841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.789 [2024-07-15 13:14:52.408851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.789 [2024-07-15 13:14:52.408858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.789 [2024-07-15 13:14:52.412420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.789 [2024-07-15 13:14:52.421428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.789 [2024-07-15 13:14:52.422124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.789 [2024-07-15 13:14:52.422165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.789 [2024-07-15 13:14:52.422176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.789 [2024-07-15 13:14:52.422423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.789 [2024-07-15 13:14:52.422647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.789 [2024-07-15 13:14:52.422656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.789 [2024-07-15 13:14:52.422663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.789 [2024-07-15 13:14:52.426219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.789 [2024-07-15 13:14:52.435226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.789 [2024-07-15 13:14:52.435936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.789 [2024-07-15 13:14:52.435973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.789 [2024-07-15 13:14:52.435984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.789 [2024-07-15 13:14:52.436224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.789 [2024-07-15 13:14:52.436456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.789 [2024-07-15 13:14:52.436464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.789 [2024-07-15 13:14:52.436472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.789 [2024-07-15 13:14:52.440025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.789 [2024-07-15 13:14:52.449031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.789 [2024-07-15 13:14:52.449638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.789 [2024-07-15 13:14:52.449656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.789 [2024-07-15 13:14:52.449664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.789 [2024-07-15 13:14:52.449884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.789 [2024-07-15 13:14:52.450103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.789 [2024-07-15 13:14:52.450111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.789 [2024-07-15 13:14:52.450118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.789 [2024-07-15 13:14:52.453669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.789 [2024-07-15 13:14:52.462871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.789 [2024-07-15 13:14:52.463528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.789 [2024-07-15 13:14:52.463565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.789 [2024-07-15 13:14:52.463576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.789 [2024-07-15 13:14:52.463815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.789 [2024-07-15 13:14:52.464043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.789 [2024-07-15 13:14:52.464052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.789 [2024-07-15 13:14:52.464059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.789 [2024-07-15 13:14:52.467630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.789 [2024-07-15 13:14:52.476844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.789 [2024-07-15 13:14:52.477555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.789 [2024-07-15 13:14:52.477593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.789 [2024-07-15 13:14:52.477603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.789 [2024-07-15 13:14:52.477843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.789 [2024-07-15 13:14:52.478066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.789 [2024-07-15 13:14:52.478074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.789 [2024-07-15 13:14:52.478082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.789 [2024-07-15 13:14:52.481641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.789 [2024-07-15 13:14:52.490732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.789 [2024-07-15 13:14:52.491344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.789 [2024-07-15 13:14:52.491381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.789 [2024-07-15 13:14:52.491393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.789 [2024-07-15 13:14:52.491636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.789 [2024-07-15 13:14:52.491859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.789 [2024-07-15 13:14:52.491868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.789 [2024-07-15 13:14:52.491875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.789 [2024-07-15 13:14:52.495435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.789 [2024-07-15 13:14:52.504644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.789 [2024-07-15 13:14:52.505276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.789 [2024-07-15 13:14:52.505313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.789 [2024-07-15 13:14:52.505326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.789 [2024-07-15 13:14:52.505569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.789 [2024-07-15 13:14:52.505792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.790 [2024-07-15 13:14:52.505800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.790 [2024-07-15 13:14:52.505808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.790 [2024-07-15 13:14:52.509378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.790 [2024-07-15 13:14:52.518591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.790 [2024-07-15 13:14:52.519147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.790 [2024-07-15 13:14:52.519184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.790 [2024-07-15 13:14:52.519196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.790 [2024-07-15 13:14:52.519447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.790 [2024-07-15 13:14:52.519671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.790 [2024-07-15 13:14:52.519680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.790 [2024-07-15 13:14:52.519687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.790 [2024-07-15 13:14:52.523242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.790 [2024-07-15 13:14:52.532453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.790 [2024-07-15 13:14:52.533022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.790 [2024-07-15 13:14:52.533041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.790 [2024-07-15 13:14:52.533048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.790 [2024-07-15 13:14:52.533275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.790 [2024-07-15 13:14:52.533496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.790 [2024-07-15 13:14:52.533503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.790 [2024-07-15 13:14:52.533510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.790 [2024-07-15 13:14:52.537056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.790 [2024-07-15 13:14:52.546265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.790 [2024-07-15 13:14:52.546949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.790 [2024-07-15 13:14:52.546986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.790 [2024-07-15 13:14:52.546997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.790 [2024-07-15 13:14:52.547246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.790 [2024-07-15 13:14:52.547470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.790 [2024-07-15 13:14:52.547479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.790 [2024-07-15 13:14:52.547486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.790 [2024-07-15 13:14:52.551039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.790 [2024-07-15 13:14:52.560251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.790 [2024-07-15 13:14:52.560937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.790 [2024-07-15 13:14:52.560973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.790 [2024-07-15 13:14:52.560989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.790 [2024-07-15 13:14:52.561238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.790 [2024-07-15 13:14:52.561462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.790 [2024-07-15 13:14:52.561470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.790 [2024-07-15 13:14:52.561478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.790 [2024-07-15 13:14:52.565030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.790 [2024-07-15 13:14:52.574050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.790 [2024-07-15 13:14:52.574761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.790 [2024-07-15 13:14:52.574798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.790 [2024-07-15 13:14:52.574809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.790 [2024-07-15 13:14:52.575048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.790 [2024-07-15 13:14:52.575281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.790 [2024-07-15 13:14:52.575298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.790 [2024-07-15 13:14:52.575305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.790 [2024-07-15 13:14:52.578859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.790 [2024-07-15 13:14:52.587865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.790 [2024-07-15 13:14:52.588551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.790 [2024-07-15 13:14:52.588588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.790 [2024-07-15 13:14:52.588599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.790 [2024-07-15 13:14:52.588839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.790 [2024-07-15 13:14:52.589063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.790 [2024-07-15 13:14:52.589071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.790 [2024-07-15 13:14:52.589078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.790 [2024-07-15 13:14:52.592643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.790 [2024-07-15 13:14:52.601859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.790 [2024-07-15 13:14:52.602543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.790 [2024-07-15 13:14:52.602580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:30.790 [2024-07-15 13:14:52.602590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:30.790 [2024-07-15 13:14:52.602831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:30.790 [2024-07-15 13:14:52.603055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.790 [2024-07-15 13:14:52.603067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.790 [2024-07-15 13:14:52.603075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.790 [2024-07-15 13:14:52.606638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.051 [2024-07-15 13:14:52.615849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.051 [2024-07-15 13:14:52.616551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 13:14:52.616588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.051 [2024-07-15 13:14:52.616599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.051 [2024-07-15 13:14:52.616838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.051 [2024-07-15 13:14:52.617062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.051 [2024-07-15 13:14:52.617070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.051 [2024-07-15 13:14:52.617078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.051 [2024-07-15 13:14:52.620640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
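A second recurring detail in these blocks: once connect() is refused, the follow-up attempt to flush the queue pair reports "(9): Bad file descriptor", i.e. EBADF, errno 9, because the socket backing the qpair has already been torn down. A small sketch of the same class of error using an ordinary closed descriptor rather than SPDK's tqpair (hypothetical, for illustration only):

/*
 * Sketch: writing through a descriptor that has already been closed fails
 * with errno 9 (EBADF), the same "Bad file descriptor" the flush errors
 * above report once the qpair's socket is gone.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    close(fds[1]);                    /* tear the descriptor down first */
    if (write(fds[1], "x", 1) < 0) {  /* then try to flush through it */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fds[0]);
    return 0;
}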
00:29:31.051 [2024-07-15 13:14:52.629646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.051 [2024-07-15 13:14:52.630329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 13:14:52.630368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.051 [2024-07-15 13:14:52.630381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.051 [2024-07-15 13:14:52.630621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.051 [2024-07-15 13:14:52.630844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.051 [2024-07-15 13:14:52.630852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.051 [2024-07-15 13:14:52.630860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.051 [2024-07-15 13:14:52.634422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.051 [2024-07-15 13:14:52.643631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.051 [2024-07-15 13:14:52.644330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 13:14:52.644367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.051 [2024-07-15 13:14:52.644377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.051 [2024-07-15 13:14:52.644617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.051 [2024-07-15 13:14:52.644840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.051 [2024-07-15 13:14:52.644848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.051 [2024-07-15 13:14:52.644856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.051 [2024-07-15 13:14:52.648415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.051 [2024-07-15 13:14:52.657428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.051 [2024-07-15 13:14:52.658131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 13:14:52.658168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.051 [2024-07-15 13:14:52.658179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.051 [2024-07-15 13:14:52.658429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.051 [2024-07-15 13:14:52.658654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.051 [2024-07-15 13:14:52.658662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.051 [2024-07-15 13:14:52.658670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.051 [2024-07-15 13:14:52.662225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.051 [2024-07-15 13:14:52.671251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.051 [2024-07-15 13:14:52.671839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.051 [2024-07-15 13:14:52.671875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.051 [2024-07-15 13:14:52.671886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.051 [2024-07-15 13:14:52.672125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.051 [2024-07-15 13:14:52.672357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.051 [2024-07-15 13:14:52.672367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.052 [2024-07-15 13:14:52.672375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.052 [2024-07-15 13:14:52.675928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.052 [2024-07-15 13:14:52.685144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.052 [2024-07-15 13:14:52.685810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 13:14:52.685847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.052 [2024-07-15 13:14:52.685857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.052 [2024-07-15 13:14:52.686097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.052 [2024-07-15 13:14:52.686330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.052 [2024-07-15 13:14:52.686339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.052 [2024-07-15 13:14:52.686347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.052 [2024-07-15 13:14:52.689903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.052 [2024-07-15 13:14:52.699121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.052 [2024-07-15 13:14:52.699793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 13:14:52.699830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.052 [2024-07-15 13:14:52.699841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.052 [2024-07-15 13:14:52.700085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.052 [2024-07-15 13:14:52.700317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.052 [2024-07-15 13:14:52.700326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.052 [2024-07-15 13:14:52.700334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.052 [2024-07-15 13:14:52.703884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.052 [2024-07-15 13:14:52.713095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.052 [2024-07-15 13:14:52.713636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 13:14:52.713674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.052 [2024-07-15 13:14:52.713686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.052 [2024-07-15 13:14:52.713926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.052 [2024-07-15 13:14:52.714150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.052 [2024-07-15 13:14:52.714159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.052 [2024-07-15 13:14:52.714166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.052 [2024-07-15 13:14:52.717731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.052 [2024-07-15 13:14:52.726942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.052 [2024-07-15 13:14:52.727610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 13:14:52.727647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.052 [2024-07-15 13:14:52.727658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.052 [2024-07-15 13:14:52.727897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.052 [2024-07-15 13:14:52.728120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.052 [2024-07-15 13:14:52.728129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.052 [2024-07-15 13:14:52.728136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.052 [2024-07-15 13:14:52.731698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.052 [2024-07-15 13:14:52.740911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.052 [2024-07-15 13:14:52.741607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 13:14:52.741644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.052 [2024-07-15 13:14:52.741654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.052 [2024-07-15 13:14:52.741894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.052 [2024-07-15 13:14:52.742117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.052 [2024-07-15 13:14:52.742126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.052 [2024-07-15 13:14:52.742138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.052 [2024-07-15 13:14:52.745698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.052 [2024-07-15 13:14:52.754703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.052 [2024-07-15 13:14:52.755394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 13:14:52.755431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.052 [2024-07-15 13:14:52.755442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.052 [2024-07-15 13:14:52.755682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.052 [2024-07-15 13:14:52.755905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.052 [2024-07-15 13:14:52.755913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.052 [2024-07-15 13:14:52.755921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.052 [2024-07-15 13:14:52.759481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.052 [2024-07-15 13:14:52.768706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.052 [2024-07-15 13:14:52.769374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 13:14:52.769411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.052 [2024-07-15 13:14:52.769423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.052 [2024-07-15 13:14:52.769666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.052 [2024-07-15 13:14:52.769889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.052 [2024-07-15 13:14:52.769898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.052 [2024-07-15 13:14:52.769905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.052 [2024-07-15 13:14:52.773468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.052 [2024-07-15 13:14:52.782674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.052 [2024-07-15 13:14:52.783376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 13:14:52.783421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.052 [2024-07-15 13:14:52.783431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.052 [2024-07-15 13:14:52.783671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.052 [2024-07-15 13:14:52.783894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.052 [2024-07-15 13:14:52.783902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.052 [2024-07-15 13:14:52.783910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.052 [2024-07-15 13:14:52.787473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.052 [2024-07-15 13:14:52.796475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.052 [2024-07-15 13:14:52.797200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 13:14:52.797244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.052 [2024-07-15 13:14:52.797257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.052 [2024-07-15 13:14:52.797497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.052 [2024-07-15 13:14:52.797721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.052 [2024-07-15 13:14:52.797729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.052 [2024-07-15 13:14:52.797737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.052 [2024-07-15 13:14:52.801297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.052 [2024-07-15 13:14:52.810294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.052 [2024-07-15 13:14:52.810993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 13:14:52.811030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.052 [2024-07-15 13:14:52.811041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.052 [2024-07-15 13:14:52.811290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.052 [2024-07-15 13:14:52.811514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.052 [2024-07-15 13:14:52.811523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.052 [2024-07-15 13:14:52.811530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.052 [2024-07-15 13:14:52.815082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.052 [2024-07-15 13:14:52.824290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.052 [2024-07-15 13:14:52.824892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.052 [2024-07-15 13:14:52.824910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.052 [2024-07-15 13:14:52.824918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.052 [2024-07-15 13:14:52.825138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.052 [2024-07-15 13:14:52.825365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.053 [2024-07-15 13:14:52.825373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.053 [2024-07-15 13:14:52.825380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.053 [2024-07-15 13:14:52.828924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.053 [2024-07-15 13:14:52.838127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.053 [2024-07-15 13:14:52.838686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 13:14:52.838702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.053 [2024-07-15 13:14:52.838709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.053 [2024-07-15 13:14:52.838928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.053 [2024-07-15 13:14:52.839153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.053 [2024-07-15 13:14:52.839160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.053 [2024-07-15 13:14:52.839167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.053 [2024-07-15 13:14:52.842716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.053 [2024-07-15 13:14:52.851912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.053 [2024-07-15 13:14:52.852462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 13:14:52.852478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.053 [2024-07-15 13:14:52.852485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.053 [2024-07-15 13:14:52.852705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.053 [2024-07-15 13:14:52.852924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.053 [2024-07-15 13:14:52.852932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.053 [2024-07-15 13:14:52.852939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.053 [2024-07-15 13:14:52.856487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.053 [2024-07-15 13:14:52.865897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.053 [2024-07-15 13:14:52.866461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.053 [2024-07-15 13:14:52.866476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.053 [2024-07-15 13:14:52.866483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.053 [2024-07-15 13:14:52.866702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.053 [2024-07-15 13:14:52.866921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.053 [2024-07-15 13:14:52.866929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.053 [2024-07-15 13:14:52.866935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.053 [2024-07-15 13:14:52.870498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.314 [2024-07-15 13:14:52.879712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:52.880400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:52.880437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:52.880447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:52.880687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.314 [2024-07-15 13:14:52.880910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.314 [2024-07-15 13:14:52.880918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.314 [2024-07-15 13:14:52.880926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.314 [2024-07-15 13:14:52.884496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.314 [2024-07-15 13:14:52.893708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:52.894324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:52.894342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:52.894351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:52.894571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.314 [2024-07-15 13:14:52.894790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.314 [2024-07-15 13:14:52.894798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.314 [2024-07-15 13:14:52.894805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.314 [2024-07-15 13:14:52.898354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.314 [2024-07-15 13:14:52.907557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:52.908217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:52.908260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:52.908271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:52.908511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.314 [2024-07-15 13:14:52.908734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.314 [2024-07-15 13:14:52.908743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.314 [2024-07-15 13:14:52.908750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.314 [2024-07-15 13:14:52.912303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.314 [2024-07-15 13:14:52.921511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:52.922195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:52.922239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:52.922251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:52.922491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.314 [2024-07-15 13:14:52.922714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.314 [2024-07-15 13:14:52.922722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.314 [2024-07-15 13:14:52.922730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.314 [2024-07-15 13:14:52.926283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.314 [2024-07-15 13:14:52.935485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:52.936124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:52.936164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:52.936176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:52.936424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.314 [2024-07-15 13:14:52.936648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.314 [2024-07-15 13:14:52.936657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.314 [2024-07-15 13:14:52.936664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.314 [2024-07-15 13:14:52.940217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.314 [2024-07-15 13:14:52.949485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:52.950160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:52.950197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:52.950209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:52.950459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.314 [2024-07-15 13:14:52.950683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.314 [2024-07-15 13:14:52.950692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.314 [2024-07-15 13:14:52.950699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.314 [2024-07-15 13:14:52.954253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.314 [2024-07-15 13:14:52.963468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:52.964065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:52.964102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:52.964113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:52.964363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.314 [2024-07-15 13:14:52.964587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.314 [2024-07-15 13:14:52.964595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.314 [2024-07-15 13:14:52.964603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.314 [2024-07-15 13:14:52.968162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.314 [2024-07-15 13:14:52.977373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:52.978073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:52.978110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:52.978120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:52.978369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.314 [2024-07-15 13:14:52.978597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.314 [2024-07-15 13:14:52.978606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.314 [2024-07-15 13:14:52.978614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.314 [2024-07-15 13:14:52.982164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.314 [2024-07-15 13:14:52.991162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:52.991844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:52.991881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:52.991892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:52.992131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.314 [2024-07-15 13:14:52.992364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.314 [2024-07-15 13:14:52.992373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.314 [2024-07-15 13:14:52.992381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.314 [2024-07-15 13:14:52.995934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.314 [2024-07-15 13:14:53.005147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:53.005823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:53.005860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:53.005871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:53.006111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.314 [2024-07-15 13:14:53.006342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.314 [2024-07-15 13:14:53.006353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.314 [2024-07-15 13:14:53.006360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.314 [2024-07-15 13:14:53.009936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.314 [2024-07-15 13:14:53.018957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:53.019466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:53.019485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:53.019493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:53.019713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.314 [2024-07-15 13:14:53.019933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.314 [2024-07-15 13:14:53.019941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.314 [2024-07-15 13:14:53.019948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.314 [2024-07-15 13:14:53.023503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.314 [2024-07-15 13:14:53.032933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:53.033470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:53.033486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:53.033494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:53.033713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.314 [2024-07-15 13:14:53.033932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.314 [2024-07-15 13:14:53.033941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.314 [2024-07-15 13:14:53.033948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.314 [2024-07-15 13:14:53.037499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.314 [2024-07-15 13:14:53.046922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:53.047610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:53.047647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:53.047658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:53.047897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.314 [2024-07-15 13:14:53.048120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.314 [2024-07-15 13:14:53.048129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.314 [2024-07-15 13:14:53.048137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.314 [2024-07-15 13:14:53.051695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.314 [2024-07-15 13:14:53.060900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.314 [2024-07-15 13:14:53.061601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.314 [2024-07-15 13:14:53.061638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.314 [2024-07-15 13:14:53.061651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.314 [2024-07-15 13:14:53.061892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.315 [2024-07-15 13:14:53.062115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.315 [2024-07-15 13:14:53.062123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.315 [2024-07-15 13:14:53.062131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.315 [2024-07-15 13:14:53.065692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.315 [2024-07-15 13:14:53.074797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.315 [2024-07-15 13:14:53.075506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-15 13:14:53.075543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.315 [2024-07-15 13:14:53.075562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.315 [2024-07-15 13:14:53.075801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.315 [2024-07-15 13:14:53.076024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.315 [2024-07-15 13:14:53.076033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.315 [2024-07-15 13:14:53.076040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.315 [2024-07-15 13:14:53.079601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.315 [2024-07-15 13:14:53.088601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.315 [2024-07-15 13:14:53.089269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-15 13:14:53.089306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.315 [2024-07-15 13:14:53.089317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.315 [2024-07-15 13:14:53.089557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.315 [2024-07-15 13:14:53.089780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.315 [2024-07-15 13:14:53.089788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.315 [2024-07-15 13:14:53.089796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.315 [2024-07-15 13:14:53.093355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.315 [2024-07-15 13:14:53.102564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.315 [2024-07-15 13:14:53.103093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-15 13:14:53.103129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.315 [2024-07-15 13:14:53.103140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.315 [2024-07-15 13:14:53.103393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.315 [2024-07-15 13:14:53.103618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.315 [2024-07-15 13:14:53.103627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.315 [2024-07-15 13:14:53.103634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.315 [2024-07-15 13:14:53.107187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.315 [2024-07-15 13:14:53.116402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.315 [2024-07-15 13:14:53.117095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-15 13:14:53.117132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.315 [2024-07-15 13:14:53.117143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.315 [2024-07-15 13:14:53.117392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.315 [2024-07-15 13:14:53.117616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.315 [2024-07-15 13:14:53.117629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.315 [2024-07-15 13:14:53.117637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.315 [2024-07-15 13:14:53.121186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.315 [2024-07-15 13:14:53.130403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.315 [2024-07-15 13:14:53.131078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.315 [2024-07-15 13:14:53.131115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.315 [2024-07-15 13:14:53.131125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.315 [2024-07-15 13:14:53.131375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.315 [2024-07-15 13:14:53.131599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.315 [2024-07-15 13:14:53.131608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.315 [2024-07-15 13:14:53.131615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.315 [2024-07-15 13:14:53.135172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.576 [2024-07-15 13:14:53.144403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.576 [2024-07-15 13:14:53.145095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.576 [2024-07-15 13:14:53.145132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.576 [2024-07-15 13:14:53.145142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.576 [2024-07-15 13:14:53.145391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.576 [2024-07-15 13:14:53.145615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.576 [2024-07-15 13:14:53.145624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.576 [2024-07-15 13:14:53.145631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.576 [2024-07-15 13:14:53.149183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.576 [2024-07-15 13:14:53.158394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.576 [2024-07-15 13:14:53.159087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.576 [2024-07-15 13:14:53.159123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.576 [2024-07-15 13:14:53.159134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.576 [2024-07-15 13:14:53.159383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.576 [2024-07-15 13:14:53.159608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.576 [2024-07-15 13:14:53.159616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.576 [2024-07-15 13:14:53.159624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.576 [2024-07-15 13:14:53.163179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.576 [2024-07-15 13:14:53.172207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.576 [2024-07-15 13:14:53.172872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.576 [2024-07-15 13:14:53.172909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.576 [2024-07-15 13:14:53.172919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.576 [2024-07-15 13:14:53.173159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.576 [2024-07-15 13:14:53.173391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.576 [2024-07-15 13:14:53.173401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.576 [2024-07-15 13:14:53.173408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.576 [2024-07-15 13:14:53.176966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.576 [2024-07-15 13:14:53.186187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.576 [2024-07-15 13:14:53.186828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.576 [2024-07-15 13:14:53.186846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.576 [2024-07-15 13:14:53.186855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.576 [2024-07-15 13:14:53.187075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.576 [2024-07-15 13:14:53.187301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.576 [2024-07-15 13:14:53.187310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.576 [2024-07-15 13:14:53.187317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.576 [2024-07-15 13:14:53.190869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.576 [2024-07-15 13:14:53.200072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.576 [2024-07-15 13:14:53.200566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.576 [2024-07-15 13:14:53.200584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.576 [2024-07-15 13:14:53.200591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.576 [2024-07-15 13:14:53.200811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.576 [2024-07-15 13:14:53.201031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.576 [2024-07-15 13:14:53.201039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.576 [2024-07-15 13:14:53.201046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.576 [2024-07-15 13:14:53.204602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.576 [2024-07-15 13:14:53.214028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.576 [2024-07-15 13:14:53.214594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.576 [2024-07-15 13:14:53.214610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.576 [2024-07-15 13:14:53.214617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.576 [2024-07-15 13:14:53.214841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.576 [2024-07-15 13:14:53.215060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.576 [2024-07-15 13:14:53.215068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.576 [2024-07-15 13:14:53.215075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.576 [2024-07-15 13:14:53.218634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.576 [2024-07-15 13:14:53.227845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.576 [2024-07-15 13:14:53.228373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.576 [2024-07-15 13:14:53.228389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.576 [2024-07-15 13:14:53.228397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.576 [2024-07-15 13:14:53.228616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.576 [2024-07-15 13:14:53.228835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.576 [2024-07-15 13:14:53.228843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.576 [2024-07-15 13:14:53.228850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.576 [2024-07-15 13:14:53.232404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.576 [2024-07-15 13:14:53.241825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.576 [2024-07-15 13:14:53.242467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.576 [2024-07-15 13:14:53.242504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.576 [2024-07-15 13:14:53.242515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.576 [2024-07-15 13:14:53.242754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.576 [2024-07-15 13:14:53.242977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.576 [2024-07-15 13:14:53.242985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.576 [2024-07-15 13:14:53.242993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.576 [2024-07-15 13:14:53.246554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.576 [2024-07-15 13:14:53.255767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.576 [2024-07-15 13:14:53.256448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.576 [2024-07-15 13:14:53.256485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.576 [2024-07-15 13:14:53.256496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.576 [2024-07-15 13:14:53.256736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.576 [2024-07-15 13:14:53.256959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.576 [2024-07-15 13:14:53.256967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.576 [2024-07-15 13:14:53.256979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.576 [2024-07-15 13:14:53.260539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.576 [2024-07-15 13:14:53.269760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.577 [2024-07-15 13:14:53.270457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.577 [2024-07-15 13:14:53.270494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.577 [2024-07-15 13:14:53.270505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.577 [2024-07-15 13:14:53.270744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.577 [2024-07-15 13:14:53.270967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.577 [2024-07-15 13:14:53.270976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.577 [2024-07-15 13:14:53.270984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.577 [2024-07-15 13:14:53.274547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.577 [2024-07-15 13:14:53.283757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.577 [2024-07-15 13:14:53.284458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.577 [2024-07-15 13:14:53.284495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.577 [2024-07-15 13:14:53.284505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.577 [2024-07-15 13:14:53.284745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.577 [2024-07-15 13:14:53.284968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.577 [2024-07-15 13:14:53.284977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.577 [2024-07-15 13:14:53.284984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.577 [2024-07-15 13:14:53.288545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.577 [2024-07-15 13:14:53.297549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.577 [2024-07-15 13:14:53.298236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.577 [2024-07-15 13:14:53.298273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.577 [2024-07-15 13:14:53.298284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.577 [2024-07-15 13:14:53.298523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.577 [2024-07-15 13:14:53.298746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.577 [2024-07-15 13:14:53.298754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.577 [2024-07-15 13:14:53.298762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.577 [2024-07-15 13:14:53.302318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.577 [2024-07-15 13:14:53.311527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.577 [2024-07-15 13:14:53.312105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.577 [2024-07-15 13:14:53.312127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.577 [2024-07-15 13:14:53.312135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.577 [2024-07-15 13:14:53.312362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.577 [2024-07-15 13:14:53.312582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.577 [2024-07-15 13:14:53.312591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.577 [2024-07-15 13:14:53.312598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.577 [2024-07-15 13:14:53.316146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.577 [2024-07-15 13:14:53.325355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.577 [2024-07-15 13:14:53.325910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.577 [2024-07-15 13:14:53.325925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.577 [2024-07-15 13:14:53.325932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.577 [2024-07-15 13:14:53.326151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.577 [2024-07-15 13:14:53.326377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.577 [2024-07-15 13:14:53.326385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.577 [2024-07-15 13:14:53.326392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.577 [2024-07-15 13:14:53.329976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.577 [2024-07-15 13:14:53.339182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.577 [2024-07-15 13:14:53.339825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.577 [2024-07-15 13:14:53.339862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.577 [2024-07-15 13:14:53.339873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.577 [2024-07-15 13:14:53.340112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.577 [2024-07-15 13:14:53.340345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.577 [2024-07-15 13:14:53.340354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.577 [2024-07-15 13:14:53.340362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.577 [2024-07-15 13:14:53.343915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.577 [2024-07-15 13:14:53.353122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.577 [2024-07-15 13:14:53.353711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.577 [2024-07-15 13:14:53.353730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.577 [2024-07-15 13:14:53.353738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.577 [2024-07-15 13:14:53.353958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.577 [2024-07-15 13:14:53.354182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.577 [2024-07-15 13:14:53.354190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.577 [2024-07-15 13:14:53.354197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.577 [2024-07-15 13:14:53.357750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.577 [2024-07-15 13:14:53.366952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.577 [2024-07-15 13:14:53.367511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.577 [2024-07-15 13:14:53.367527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.577 [2024-07-15 13:14:53.367535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.577 [2024-07-15 13:14:53.367754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.577 [2024-07-15 13:14:53.367973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.577 [2024-07-15 13:14:53.367981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.577 [2024-07-15 13:14:53.367988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.577 [2024-07-15 13:14:53.371547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.577 [2024-07-15 13:14:53.380748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.577 [2024-07-15 13:14:53.381297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.577 [2024-07-15 13:14:53.381313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.577 [2024-07-15 13:14:53.381320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.577 [2024-07-15 13:14:53.381539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.577 [2024-07-15 13:14:53.381758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.577 [2024-07-15 13:14:53.381766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.577 [2024-07-15 13:14:53.381773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.577 [2024-07-15 13:14:53.385319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.577 [2024-07-15 13:14:53.394723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.577 [2024-07-15 13:14:53.395336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.577 [2024-07-15 13:14:53.395374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.577 [2024-07-15 13:14:53.395384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.577 [2024-07-15 13:14:53.395625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.577 [2024-07-15 13:14:53.395848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.577 [2024-07-15 13:14:53.395857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.577 [2024-07-15 13:14:53.395864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.577 [2024-07-15 13:14:53.399438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.839 [2024-07-15 13:14:53.408658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.839 [2024-07-15 13:14:53.409195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.839 [2024-07-15 13:14:53.409239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.839 [2024-07-15 13:14:53.409250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.839 [2024-07-15 13:14:53.409490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.839 [2024-07-15 13:14:53.409713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.839 [2024-07-15 13:14:53.409721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.839 [2024-07-15 13:14:53.409729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.839 [2024-07-15 13:14:53.413286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 879413 Killed "${NVMF_APP[@]}" "$@" 00:29:31.839 [2024-07-15 13:14:53.422491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:31.839 [2024-07-15 13:14:53.423172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.839 [2024-07-15 13:14:53.423209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.839 [2024-07-15 13:14:53.423222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.839 [2024-07-15 13:14:53.423473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:31.839 [2024-07-15 13:14:53.423697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.839 [2024-07-15 13:14:53.423707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.839 [2024-07-15 13:14:53.423714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.839 [2024-07-15 13:14:53.427272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=880949 00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 880949 00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 880949 ']' 00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:31.839 13:14:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:31.839 [2024-07-15 13:14:53.436501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.839 [2024-07-15 13:14:53.437074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.839 [2024-07-15 13:14:53.437111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.839 [2024-07-15 13:14:53.437122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.839 [2024-07-15 13:14:53.437372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.839 [2024-07-15 13:14:53.437597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.839 [2024-07-15 13:14:53.437606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.839 [2024-07-15 13:14:53.437614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.839 [2024-07-15 13:14:53.441173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.839 [2024-07-15 13:14:53.450404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.839 [2024-07-15 13:14:53.451013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.839 [2024-07-15 13:14:53.451031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.839 [2024-07-15 13:14:53.451040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.839 [2024-07-15 13:14:53.451269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.839 [2024-07-15 13:14:53.451489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.839 [2024-07-15 13:14:53.451497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.839 [2024-07-15 13:14:53.451505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.839 [2024-07-15 13:14:53.455056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.839 [2024-07-15 13:14:53.464316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.839 [2024-07-15 13:14:53.464963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.839 [2024-07-15 13:14:53.465000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.839 [2024-07-15 13:14:53.465011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.839 [2024-07-15 13:14:53.465260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.839 [2024-07-15 13:14:53.465483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.839 [2024-07-15 13:14:53.465492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.839 [2024-07-15 13:14:53.465500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.839 [2024-07-15 13:14:53.469063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.839 [2024-07-15 13:14:53.478288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.839 [2024-07-15 13:14:53.478926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.839 [2024-07-15 13:14:53.478945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.839 [2024-07-15 13:14:53.478957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.839 [2024-07-15 13:14:53.479178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.839 [2024-07-15 13:14:53.479405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.839 [2024-07-15 13:14:53.479413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.839 [2024-07-15 13:14:53.479420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.839 [2024-07-15 13:14:53.482968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.839 [2024-07-15 13:14:53.487575] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:29:31.839 [2024-07-15 13:14:53.487644] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.839 [2024-07-15 13:14:53.492179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.839 [2024-07-15 13:14:53.492889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.839 [2024-07-15 13:14:53.492928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.839 [2024-07-15 13:14:53.492939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.839 [2024-07-15 13:14:53.493179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.839 [2024-07-15 13:14:53.493411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.839 [2024-07-15 13:14:53.493421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.840 [2024-07-15 13:14:53.493429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.840 [2024-07-15 13:14:53.496983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.840 [2024-07-15 13:14:53.505990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.840 [2024-07-15 13:14:53.506707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.840 [2024-07-15 13:14:53.506745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.840 [2024-07-15 13:14:53.506756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.840 [2024-07-15 13:14:53.506996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.840 [2024-07-15 13:14:53.507220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.840 [2024-07-15 13:14:53.507239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.840 [2024-07-15 13:14:53.507247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.840 [2024-07-15 13:14:53.510801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.840 [2024-07-15 13:14:53.519811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.840 [2024-07-15 13:14:53.520508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.840 [2024-07-15 13:14:53.520547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.840 [2024-07-15 13:14:53.520558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.840 [2024-07-15 13:14:53.520802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.840 [2024-07-15 13:14:53.521026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.840 [2024-07-15 13:14:53.521036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.840 [2024-07-15 13:14:53.521044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.840 [2024-07-15 13:14:53.524684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.840 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.840 [2024-07-15 13:14:53.533721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.840 [2024-07-15 13:14:53.534492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.840 [2024-07-15 13:14:53.534531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.840 [2024-07-15 13:14:53.534543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.840 [2024-07-15 13:14:53.534784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.840 [2024-07-15 13:14:53.535008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.840 [2024-07-15 13:14:53.535017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.840 [2024-07-15 13:14:53.535026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.840 [2024-07-15 13:14:53.538598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.840 [2024-07-15 13:14:53.547610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.840 [2024-07-15 13:14:53.548225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.840 [2024-07-15 13:14:53.548270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.840 [2024-07-15 13:14:53.548281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.840 [2024-07-15 13:14:53.548521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.840 [2024-07-15 13:14:53.548746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.840 [2024-07-15 13:14:53.548756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.840 [2024-07-15 13:14:53.548764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.840 [2024-07-15 13:14:53.552324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.840 [2024-07-15 13:14:53.561538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.840 [2024-07-15 13:14:53.562274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.840 [2024-07-15 13:14:53.562312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.840 [2024-07-15 13:14:53.562324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.840 [2024-07-15 13:14:53.562568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.840 [2024-07-15 13:14:53.562792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.840 [2024-07-15 13:14:53.562806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.840 [2024-07-15 13:14:53.562813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.840 [2024-07-15 13:14:53.566379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.840 [2024-07-15 13:14:53.575406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.840 [2024-07-15 13:14:53.575971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.840 [2024-07-15 13:14:53.575990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.840 [2024-07-15 13:14:53.575999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.840 [2024-07-15 13:14:53.576219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.840 [2024-07-15 13:14:53.576445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.840 [2024-07-15 13:14:53.576454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.840 [2024-07-15 13:14:53.576461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.840 [2024-07-15 13:14:53.579686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:31.840 [2024-07-15 13:14:53.580007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.840 [2024-07-15 13:14:53.589218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.840 [2024-07-15 13:14:53.589930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.840 [2024-07-15 13:14:53.589969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.840 [2024-07-15 13:14:53.589980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.840 [2024-07-15 13:14:53.590221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.840 [2024-07-15 13:14:53.590455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.840 [2024-07-15 13:14:53.590466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.840 [2024-07-15 13:14:53.590474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.840 [2024-07-15 13:14:53.594027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.840 [2024-07-15 13:14:53.603049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.840 [2024-07-15 13:14:53.603727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.840 [2024-07-15 13:14:53.603765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.840 [2024-07-15 13:14:53.603776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.840 [2024-07-15 13:14:53.604016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.840 [2024-07-15 13:14:53.604249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.840 [2024-07-15 13:14:53.604260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.840 [2024-07-15 13:14:53.604268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.840 [2024-07-15 13:14:53.607831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:31.840 [2024-07-15 13:14:53.616862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.840 [2024-07-15 13:14:53.617477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.840 [2024-07-15 13:14:53.617497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.840 [2024-07-15 13:14:53.617506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.840 [2024-07-15 13:14:53.617727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.840 [2024-07-15 13:14:53.617947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.840 [2024-07-15 13:14:53.617956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.840 [2024-07-15 13:14:53.617963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.840 [2024-07-15 13:14:53.621522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.840 [2024-07-15 13:14:53.630740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.840 [2024-07-15 13:14:53.631457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.840 [2024-07-15 13:14:53.631496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.840 [2024-07-15 13:14:53.631507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.840 [2024-07-15 13:14:53.631748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.840 [2024-07-15 13:14:53.631972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.840 [2024-07-15 13:14:53.631981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.840 [2024-07-15 13:14:53.631989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.840 [2024-07-15 13:14:53.633445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.840 [2024-07-15 13:14:53.633469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.840 [2024-07-15 13:14:53.633475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.840 [2024-07-15 13:14:53.633480] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.840 [2024-07-15 13:14:53.633484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:31.840 [2024-07-15 13:14:53.633584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.840 [2024-07-15 13:14:53.633740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.840 [2024-07-15 13:14:53.633742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.841 [2024-07-15 13:14:53.635552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.841 [2024-07-15 13:14:53.644566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.841 [2024-07-15 13:14:53.645148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.841 [2024-07-15 13:14:53.645188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.841 [2024-07-15 13:14:53.645200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.841 [2024-07-15 13:14:53.645449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.841 [2024-07-15 13:14:53.645675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.841 [2024-07-15 13:14:53.645690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.841 [2024-07-15 13:14:53.645698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:31.841 [2024-07-15 13:14:53.649257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:31.841 [2024-07-15 13:14:53.658481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:31.841 [2024-07-15 13:14:53.659126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.841 [2024-07-15 13:14:53.659145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:31.841 [2024-07-15 13:14:53.659153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:31.841 [2024-07-15 13:14:53.659379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:31.841 [2024-07-15 13:14:53.659601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:31.841 [2024-07-15 13:14:53.659611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:31.841 [2024-07-15 13:14:53.659619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.102 [2024-07-15 13:14:53.663164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.102 [2024-07-15 13:14:53.672400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.102 [2024-07-15 13:14:53.672890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.102 [2024-07-15 13:14:53.672907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.102 [2024-07-15 13:14:53.672916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.102 [2024-07-15 13:14:53.673137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.102 [2024-07-15 13:14:53.673365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.102 [2024-07-15 13:14:53.673378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.102 [2024-07-15 13:14:53.673385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.102 [2024-07-15 13:14:53.676931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.102 [2024-07-15 13:14:53.686357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.102 [2024-07-15 13:14:53.687051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.102 [2024-07-15 13:14:53.687092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.102 [2024-07-15 13:14:53.687104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.102 [2024-07-15 13:14:53.687356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.102 [2024-07-15 13:14:53.687582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.102 [2024-07-15 13:14:53.687592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.102 [2024-07-15 13:14:53.687601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.102 [2024-07-15 13:14:53.691156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.102 [2024-07-15 13:14:53.700178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.102 [2024-07-15 13:14:53.700640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.102 [2024-07-15 13:14:53.700660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.102 [2024-07-15 13:14:53.700669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.102 [2024-07-15 13:14:53.700890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.102 [2024-07-15 13:14:53.701111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.102 [2024-07-15 13:14:53.701121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.102 [2024-07-15 13:14:53.701129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.102 [2024-07-15 13:14:53.704684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.102 [2024-07-15 13:14:53.714107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.102 [2024-07-15 13:14:53.714684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.102 [2024-07-15 13:14:53.714700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.102 [2024-07-15 13:14:53.714708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.102 [2024-07-15 13:14:53.714928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.102 [2024-07-15 13:14:53.715148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.102 [2024-07-15 13:14:53.715157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.102 [2024-07-15 13:14:53.715164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.102 [2024-07-15 13:14:53.718715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.102 [2024-07-15 13:14:53.727927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.102 [2024-07-15 13:14:53.728602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.102 [2024-07-15 13:14:53.728642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.102 [2024-07-15 13:14:53.728653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.102 [2024-07-15 13:14:53.728893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.102 [2024-07-15 13:14:53.729116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.102 [2024-07-15 13:14:53.729127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.102 [2024-07-15 13:14:53.729135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.102 [2024-07-15 13:14:53.732698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.102 [2024-07-15 13:14:53.741921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.102 [2024-07-15 13:14:53.742620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.102 [2024-07-15 13:14:53.742659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.102 [2024-07-15 13:14:53.742670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.102 [2024-07-15 13:14:53.742914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.102 [2024-07-15 13:14:53.743138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.102 [2024-07-15 13:14:53.743148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.102 [2024-07-15 13:14:53.743156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.102 [2024-07-15 13:14:53.746718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.102 [2024-07-15 13:14:53.755728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.102 [2024-07-15 13:14:53.756480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.102 [2024-07-15 13:14:53.756519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.102 [2024-07-15 13:14:53.756530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.102 [2024-07-15 13:14:53.756770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.102 [2024-07-15 13:14:53.756995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.102 [2024-07-15 13:14:53.757006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.102 [2024-07-15 13:14:53.757014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.102 [2024-07-15 13:14:53.760572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.103 [2024-07-15 13:14:53.769589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.103 [2024-07-15 13:14:53.770304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.103 [2024-07-15 13:14:53.770343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.103 [2024-07-15 13:14:53.770355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.103 [2024-07-15 13:14:53.770597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.103 [2024-07-15 13:14:53.770821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.103 [2024-07-15 13:14:53.770831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.103 [2024-07-15 13:14:53.770839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.103 [2024-07-15 13:14:53.774402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.103 [2024-07-15 13:14:53.783480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.103 [2024-07-15 13:14:53.784203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.103 [2024-07-15 13:14:53.784249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.103 [2024-07-15 13:14:53.784262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.103 [2024-07-15 13:14:53.784504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.103 [2024-07-15 13:14:53.784728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.103 [2024-07-15 13:14:53.784738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.103 [2024-07-15 13:14:53.784754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.103 [2024-07-15 13:14:53.788313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.103 [2024-07-15 13:14:53.797317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.103 [2024-07-15 13:14:53.797892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.103 [2024-07-15 13:14:53.797911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.103 [2024-07-15 13:14:53.797919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.103 [2024-07-15 13:14:53.798140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.103 [2024-07-15 13:14:53.798367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.103 [2024-07-15 13:14:53.798377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.103 [2024-07-15 13:14:53.798384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.103 [2024-07-15 13:14:53.801931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.103 [2024-07-15 13:14:53.811141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.103 [2024-07-15 13:14:53.811851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.103 [2024-07-15 13:14:53.811890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.103 [2024-07-15 13:14:53.811901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.103 [2024-07-15 13:14:53.812141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.103 [2024-07-15 13:14:53.812375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.103 [2024-07-15 13:14:53.812385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.103 [2024-07-15 13:14:53.812393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.103 [2024-07-15 13:14:53.815944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.103 [2024-07-15 13:14:53.824947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.103 [2024-07-15 13:14:53.825536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.103 [2024-07-15 13:14:53.825557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.103 [2024-07-15 13:14:53.825565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.103 [2024-07-15 13:14:53.825786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.103 [2024-07-15 13:14:53.826007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.103 [2024-07-15 13:14:53.826016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.103 [2024-07-15 13:14:53.826023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.103 [2024-07-15 13:14:53.829576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.103 [2024-07-15 13:14:53.838786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.103 [2024-07-15 13:14:53.839524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.103 [2024-07-15 13:14:53.839562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.103 [2024-07-15 13:14:53.839573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.103 [2024-07-15 13:14:53.839813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.103 [2024-07-15 13:14:53.840037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.103 [2024-07-15 13:14:53.840046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.103 [2024-07-15 13:14:53.840054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.103 [2024-07-15 13:14:53.843621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.103 [2024-07-15 13:14:53.852635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.103 [2024-07-15 13:14:53.853305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.103 [2024-07-15 13:14:53.853344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.103 [2024-07-15 13:14:53.853356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.103 [2024-07-15 13:14:53.853600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.103 [2024-07-15 13:14:53.853824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.103 [2024-07-15 13:14:53.853834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.103 [2024-07-15 13:14:53.853842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.103 [2024-07-15 13:14:53.857406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.103 [2024-07-15 13:14:53.866618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.103 [2024-07-15 13:14:53.867275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.103 [2024-07-15 13:14:53.867313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.103 [2024-07-15 13:14:53.867325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.103 [2024-07-15 13:14:53.867567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.103 [2024-07-15 13:14:53.867791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.103 [2024-07-15 13:14:53.867800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.103 [2024-07-15 13:14:53.867808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.103 [2024-07-15 13:14:53.871384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.103 [2024-07-15 13:14:53.880602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.103 [2024-07-15 13:14:53.881368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.103 [2024-07-15 13:14:53.881406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.103 [2024-07-15 13:14:53.881417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.103 [2024-07-15 13:14:53.881662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.103 [2024-07-15 13:14:53.881887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.103 [2024-07-15 13:14:53.881897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.103 [2024-07-15 13:14:53.881904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.103 [2024-07-15 13:14:53.885463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.103 [2024-07-15 13:14:53.894472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.103 [2024-07-15 13:14:53.895206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.103 [2024-07-15 13:14:53.895254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.103 [2024-07-15 13:14:53.895265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.103 [2024-07-15 13:14:53.895505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.103 [2024-07-15 13:14:53.895729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.103 [2024-07-15 13:14:53.895739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.103 [2024-07-15 13:14:53.895747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.103 [2024-07-15 13:14:53.899306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.103 [2024-07-15 13:14:53.908315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.103 [2024-07-15 13:14:53.908882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.103 [2024-07-15 13:14:53.908920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.103 [2024-07-15 13:14:53.908931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.103 [2024-07-15 13:14:53.909171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.103 [2024-07-15 13:14:53.909403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.104 [2024-07-15 13:14:53.909413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.104 [2024-07-15 13:14:53.909421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.104 [2024-07-15 13:14:53.912973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.104 [2024-07-15 13:14:53.922194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.104 [2024-07-15 13:14:53.922904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.104 [2024-07-15 13:14:53.922942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.104 [2024-07-15 13:14:53.922953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.104 [2024-07-15 13:14:53.923194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.104 [2024-07-15 13:14:53.923426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.104 [2024-07-15 13:14:53.923436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.104 [2024-07-15 13:14:53.923449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.367 [2024-07-15 13:14:53.927007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.367 [2024-07-15 13:14:53.936017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.367 [2024-07-15 13:14:53.936625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.367 [2024-07-15 13:14:53.936645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.367 [2024-07-15 13:14:53.936653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.367 [2024-07-15 13:14:53.936873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.367 [2024-07-15 13:14:53.937093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.367 [2024-07-15 13:14:53.937102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.367 [2024-07-15 13:14:53.937109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.367 [2024-07-15 13:14:53.940662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.367 [2024-07-15 13:14:53.949876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.367 [2024-07-15 13:14:53.950564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.367 [2024-07-15 13:14:53.950603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.367 [2024-07-15 13:14:53.950614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.367 [2024-07-15 13:14:53.950853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.367 [2024-07-15 13:14:53.951077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.367 [2024-07-15 13:14:53.951087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.367 [2024-07-15 13:14:53.951095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.367 [2024-07-15 13:14:53.954657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.367 [2024-07-15 13:14:53.963875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.367 [2024-07-15 13:14:53.964570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.367 [2024-07-15 13:14:53.964608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.367 [2024-07-15 13:14:53.964619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.367 [2024-07-15 13:14:53.964859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.367 [2024-07-15 13:14:53.965083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.367 [2024-07-15 13:14:53.965092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.367 [2024-07-15 13:14:53.965101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.367 [2024-07-15 13:14:53.968662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.367 [2024-07-15 13:14:53.977679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.367 [2024-07-15 13:14:53.978490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.367 [2024-07-15 13:14:53.978532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.367 [2024-07-15 13:14:53.978544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.367 [2024-07-15 13:14:53.978783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.367 [2024-07-15 13:14:53.979008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.367 [2024-07-15 13:14:53.979017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.367 [2024-07-15 13:14:53.979025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.367 [2024-07-15 13:14:53.982583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.367 [2024-07-15 13:14:53.991594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.367 [2024-07-15 13:14:53.992307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.367 [2024-07-15 13:14:53.992345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.367 [2024-07-15 13:14:53.992358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.367 [2024-07-15 13:14:53.992599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.367 [2024-07-15 13:14:53.992824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.367 [2024-07-15 13:14:53.992833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.367 [2024-07-15 13:14:53.992841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.367 [2024-07-15 13:14:53.996402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.367 [2024-07-15 13:14:54.005408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.367 [2024-07-15 13:14:54.006063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.367 [2024-07-15 13:14:54.006101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.367 [2024-07-15 13:14:54.006112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.367 [2024-07-15 13:14:54.006359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.367 [2024-07-15 13:14:54.006584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.367 [2024-07-15 13:14:54.006593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.367 [2024-07-15 13:14:54.006601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.367 [2024-07-15 13:14:54.010155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.367 [2024-07-15 13:14:54.019373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.367 [2024-07-15 13:14:54.019829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.367 [2024-07-15 13:14:54.019848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.367 [2024-07-15 13:14:54.019856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.367 [2024-07-15 13:14:54.020076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.367 [2024-07-15 13:14:54.020308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.367 [2024-07-15 13:14:54.020320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.367 [2024-07-15 13:14:54.020327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.367 [2024-07-15 13:14:54.023877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.367 [2024-07-15 13:14:54.033398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.367 [2024-07-15 13:14:54.033967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.367 [2024-07-15 13:14:54.033983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.367 [2024-07-15 13:14:54.033991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.367 [2024-07-15 13:14:54.034210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.367 [2024-07-15 13:14:54.034436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.367 [2024-07-15 13:14:54.034445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.367 [2024-07-15 13:14:54.034453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.367 [2024-07-15 13:14:54.037999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.367 [2024-07-15 13:14:54.047206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.367 [2024-07-15 13:14:54.047822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.367 [2024-07-15 13:14:54.047838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.367 [2024-07-15 13:14:54.047846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.367 [2024-07-15 13:14:54.048065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.367 [2024-07-15 13:14:54.048291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.367 [2024-07-15 13:14:54.048302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.367 [2024-07-15 13:14:54.048309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.367 [2024-07-15 13:14:54.051857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.367 [2024-07-15 13:14:54.061071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.367 [2024-07-15 13:14:54.061613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.367 [2024-07-15 13:14:54.061652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.367 [2024-07-15 13:14:54.061664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.367 [2024-07-15 13:14:54.061905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.367 [2024-07-15 13:14:54.062129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.367 [2024-07-15 13:14:54.062138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.367 [2024-07-15 13:14:54.062146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.367 [2024-07-15 13:14:54.065708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.367 [2024-07-15 13:14:54.074935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.368 [2024-07-15 13:14:54.075390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.368 [2024-07-15 13:14:54.075412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.368 [2024-07-15 13:14:54.075420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.368 [2024-07-15 13:14:54.075642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.368 [2024-07-15 13:14:54.075863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.368 [2024-07-15 13:14:54.075872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.368 [2024-07-15 13:14:54.075880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.368 [2024-07-15 13:14:54.079434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.368 [2024-07-15 13:14:54.088854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.368 [2024-07-15 13:14:54.089534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.368 [2024-07-15 13:14:54.089572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.368 [2024-07-15 13:14:54.089583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.368 [2024-07-15 13:14:54.089823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.368 [2024-07-15 13:14:54.090047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.368 [2024-07-15 13:14:54.090057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.368 [2024-07-15 13:14:54.090065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.368 [2024-07-15 13:14:54.093627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.368 [2024-07-15 13:14:54.102731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.368 [2024-07-15 13:14:54.103353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.368 [2024-07-15 13:14:54.103372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.368 [2024-07-15 13:14:54.103380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.368 [2024-07-15 13:14:54.103601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.368 [2024-07-15 13:14:54.103822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.368 [2024-07-15 13:14:54.103831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.368 [2024-07-15 13:14:54.103839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.368 [2024-07-15 13:14:54.107395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.368 [2024-07-15 13:14:54.116604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.368 [2024-07-15 13:14:54.117208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.368 [2024-07-15 13:14:54.117224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.368 [2024-07-15 13:14:54.117242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.368 [2024-07-15 13:14:54.117462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.368 [2024-07-15 13:14:54.117682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.368 [2024-07-15 13:14:54.117691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.368 [2024-07-15 13:14:54.117699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.368 [2024-07-15 13:14:54.121250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.368 [2024-07-15 13:14:54.130461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.368 [2024-07-15 13:14:54.131144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.368 [2024-07-15 13:14:54.131183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.368 [2024-07-15 13:14:54.131194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.368 [2024-07-15 13:14:54.131440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.368 [2024-07-15 13:14:54.131665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.368 [2024-07-15 13:14:54.131675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.368 [2024-07-15 13:14:54.131683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.368 [2024-07-15 13:14:54.135240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.368 [2024-07-15 13:14:54.144460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.368 [2024-07-15 13:14:54.145177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.368 [2024-07-15 13:14:54.145215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.368 [2024-07-15 13:14:54.145227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.368 [2024-07-15 13:14:54.145476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.368 [2024-07-15 13:14:54.145700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.368 [2024-07-15 13:14:54.145710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.368 [2024-07-15 13:14:54.145718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.368 [2024-07-15 13:14:54.149276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.368 [2024-07-15 13:14:54.158287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.368 [2024-07-15 13:14:54.159002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.368 [2024-07-15 13:14:54.159041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.368 [2024-07-15 13:14:54.159052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.368 [2024-07-15 13:14:54.159300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.368 [2024-07-15 13:14:54.159525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.368 [2024-07-15 13:14:54.159538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.368 [2024-07-15 13:14:54.159546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.368 [2024-07-15 13:14:54.163102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.368 [2024-07-15 13:14:54.172124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.368 [2024-07-15 13:14:54.172825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.368 [2024-07-15 13:14:54.172863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.368 [2024-07-15 13:14:54.172875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.368 [2024-07-15 13:14:54.173116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.368 [2024-07-15 13:14:54.173349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.368 [2024-07-15 13:14:54.173360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.368 [2024-07-15 13:14:54.173368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.368 [2024-07-15 13:14:54.176921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.368 [2024-07-15 13:14:54.185935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.368 [2024-07-15 13:14:54.186649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.368 [2024-07-15 13:14:54.186687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.368 [2024-07-15 13:14:54.186699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.368 [2024-07-15 13:14:54.186939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.368 [2024-07-15 13:14:54.187164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.368 [2024-07-15 13:14:54.187174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.368 [2024-07-15 13:14:54.187181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.629 [2024-07-15 13:14:54.190744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.629 [2024-07-15 13:14:54.199766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.629 [2024-07-15 13:14:54.200300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.629 [2024-07-15 13:14:54.200321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.629 [2024-07-15 13:14:54.200330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.629 [2024-07-15 13:14:54.200551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.629 [2024-07-15 13:14:54.200772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.629 [2024-07-15 13:14:54.200783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.629 [2024-07-15 13:14:54.200790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.629 [2024-07-15 13:14:54.204342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.629 [2024-07-15 13:14:54.213770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.629 [2024-07-15 13:14:54.214470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.629 [2024-07-15 13:14:54.214509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.629 [2024-07-15 13:14:54.214520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.629 [2024-07-15 13:14:54.214760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.629 [2024-07-15 13:14:54.214984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.629 [2024-07-15 13:14:54.214993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.629 [2024-07-15 13:14:54.215002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.629 [2024-07-15 13:14:54.218565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.629 [2024-07-15 13:14:54.227576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.629 [2024-07-15 13:14:54.228263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.629 [2024-07-15 13:14:54.228301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.629 [2024-07-15 13:14:54.228312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.629 [2024-07-15 13:14:54.228552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.629 [2024-07-15 13:14:54.228776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.629 [2024-07-15 13:14:54.228786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.629 [2024-07-15 13:14:54.228794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.629 [2024-07-15 13:14:54.232357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.629 [2024-07-15 13:14:54.241572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.629 [2024-07-15 13:14:54.242130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.629 [2024-07-15 13:14:54.242169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.629 [2024-07-15 13:14:54.242181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.629 [2024-07-15 13:14:54.242430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.629 [2024-07-15 13:14:54.242655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.629 [2024-07-15 13:14:54.242665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.629 [2024-07-15 13:14:54.242673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.629 [2024-07-15 13:14:54.246222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.629 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:32.629 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:32.629 13:14:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:32.629 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:32.629 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.629 [2024-07-15 13:14:54.255446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.629 [2024-07-15 13:14:54.256067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.629 [2024-07-15 13:14:54.256086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.630 [2024-07-15 13:14:54.256095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.630 [2024-07-15 13:14:54.256322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.630 [2024-07-15 13:14:54.256543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.630 [2024-07-15 13:14:54.256552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.630 [2024-07-15 13:14:54.256559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.630 [2024-07-15 13:14:54.260108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.630 [2024-07-15 13:14:54.269335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.630 [2024-07-15 13:14:54.269828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.630 [2024-07-15 13:14:54.269866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.630 [2024-07-15 13:14:54.269877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.630 [2024-07-15 13:14:54.270117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.630 [2024-07-15 13:14:54.270349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.630 [2024-07-15 13:14:54.270360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.630 [2024-07-15 13:14:54.270369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.630 [2024-07-15 13:14:54.273921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.630 [2024-07-15 13:14:54.283142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.630 [2024-07-15 13:14:54.283883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.630 [2024-07-15 13:14:54.283922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.630 [2024-07-15 13:14:54.283933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.630 [2024-07-15 13:14:54.284173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.630 [2024-07-15 13:14:54.284405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.630 [2024-07-15 13:14:54.284416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.630 [2024-07-15 13:14:54.284424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.630 [2024-07-15 13:14:54.287978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.630 [2024-07-15 13:14:54.296389] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.630 [2024-07-15 13:14:54.296992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.630 [2024-07-15 13:14:54.297669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.630 [2024-07-15 13:14:54.297707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.630 [2024-07-15 13:14:54.297718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.630 [2024-07-15 13:14:54.297959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.630 [2024-07-15 13:14:54.298182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.630 [2024-07-15 13:14:54.298193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.630 [2024-07-15 13:14:54.298201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.630 [2024-07-15 13:14:54.301760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.630 [2024-07-15 13:14:54.310978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.630 [2024-07-15 13:14:54.311703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.630 [2024-07-15 13:14:54.311743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.630 [2024-07-15 13:14:54.311754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.630 [2024-07-15 13:14:54.311995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.630 [2024-07-15 13:14:54.312220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.630 [2024-07-15 13:14:54.312238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.630 [2024-07-15 13:14:54.312247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.630 [2024-07-15 13:14:54.315800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.630 [2024-07-15 13:14:54.324804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.630 [2024-07-15 13:14:54.325368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.630 [2024-07-15 13:14:54.325406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.630 [2024-07-15 13:14:54.325417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.630 [2024-07-15 13:14:54.325657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.630 [2024-07-15 13:14:54.325881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.630 [2024-07-15 13:14:54.325890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.630 [2024-07-15 13:14:54.325898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.630 [2024-07-15 13:14:54.329458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:32.630 Malloc0 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.630 [2024-07-15 13:14:54.338673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.630 [2024-07-15 13:14:54.339248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.630 [2024-07-15 13:14:54.339286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.630 [2024-07-15 13:14:54.339297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.630 [2024-07-15 13:14:54.339537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.630 [2024-07-15 13:14:54.339761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.630 [2024-07-15 13:14:54.339770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.630 [2024-07-15 13:14:54.339778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:32.630 [2024-07-15 13:14:54.343338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.630 [2024-07-15 13:14:54.352552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.630 [2024-07-15 13:14:54.353285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.630 [2024-07-15 13:14:54.353323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2127540 with addr=10.0.0.2, port=4420 00:29:32.630 [2024-07-15 13:14:54.353335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127540 is same with the state(5) to be set 00:29:32.630 [2024-07-15 13:14:54.353579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2127540 (9): Bad file descriptor 00:29:32.630 [2024-07-15 13:14:54.353803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.630 [2024-07-15 13:14:54.353813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.630 [2024-07-15 13:14:54.353821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:32.630 [2024-07-15 13:14:54.357379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.630 [2024-07-15 13:14:54.361897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.630 [2024-07-15 13:14:54.366385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.630 13:14:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 879927 00:29:32.630 [2024-07-15 13:14:54.412553] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:42.620 00:29:42.620 Latency(us) 00:29:42.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.620 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:42.620 Verification LBA range: start 0x0 length 0x4000 00:29:42.620 Nvme1n1 : 15.01 8118.51 31.71 9688.80 0.00 7162.09 556.37 15510.19 00:29:42.620 =================================================================================================================== 00:29:42.620 Total : 8118.51 31.71 9688.80 0.00 7162.09 556.37 15510.19 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:42.620 rmmod nvme_tcp 00:29:42.620 rmmod nvme_fabrics 00:29:42.620 rmmod nvme_keyring 00:29:42.620 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 880949 ']' 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 880949 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 880949 ']' 
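The bdevperf pass above configures the target entirely through rpc_cmd, the autotest wrapper around scripts/rpc.py. A minimal standalone sketch of the same bring-up sequence, assuming the workspace path and the 10.0.0.2:4420 test address used in this job, would look like:

    # Sketch only: replays the RPC sequence captured in the xtrace above.
    # SPDK_DIR and the address/port are assumptions taken from this job's layout.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"

    $RPC nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8192-byte in-capsule data size
    $RPC bdev_malloc_create 64 512 -b Malloc0               # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow-any-host subsystem
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener comes up ("NVMe/TCP Target Listening on 10.0.0.2 port 4420" above), the bdevperf job attaches to nqn.2016-06.io.spdk:cnode1 and eventually prints the latency summary shown in this log.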
00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 880949 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 880949 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 880949' 00:29:42.621 killing process with pid 880949 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 880949 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 880949 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:42.621 13:15:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.004 13:15:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:44.004 00:29:44.004 real 0m28.898s 00:29:44.004 user 1m3.216s 00:29:44.004 sys 0m7.891s 00:29:44.004 13:15:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:44.004 13:15:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.004 ************************************ 00:29:44.004 END TEST nvmf_bdevperf 00:29:44.004 ************************************ 00:29:44.004 13:15:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:44.004 13:15:05 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:44.004 13:15:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:44.004 13:15:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:44.004 13:15:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:44.004 ************************************ 00:29:44.004 START TEST nvmf_target_disconnect 00:29:44.004 ************************************ 00:29:44.004 13:15:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:44.004 * Looking for test storage... 
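The killprocess/nvmftestfini teardown traced above reduces to a small pattern: confirm the pid is still alive, refuse to kill a bare sudo wrapper, then kill and reap the process. A condensed sketch of that sequence (the helper name is reused here only for illustration, this is not the full autotest_common.sh implementation):

    # Condensed form of the teardown steps visible in the xtrace above.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0              # already gone, nothing to do
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_1 for an SPDK app
        [ "$process_name" = sudo ] && return 1               # never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                                  # reap it so the job does not hang
    }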
00:29:44.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:44.004 13:15:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.004 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:44.005 13:15:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:52.147 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.147 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:52.147 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:52.147 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:52.147 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:52.147 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:52.147 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:52.148 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:52.148 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.148 13:15:13 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:52.148 Found net devices under 0000:31:00.0: cvl_0_0 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:52.148 Found net devices under 0000:31:00.1: cvl_0_1 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:52.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:29:52.148 00:29:52.148 --- 10.0.0.2 ping statistics --- 00:29:52.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.148 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:52.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:29:52.148 00:29:52.148 --- 10.0.0.1 ping statistics --- 00:29:52.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.148 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:52.148 ************************************ 00:29:52.148 START TEST nvmf_target_disconnect_tc1 00:29:52.148 ************************************ 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:52.148 
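The nvmf_tcp_init steps traced above build the test topology: the target port cvl_0_0 is moved into its own network namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, the firewall is opened for the NVMe/TCP port, and connectivity is verified in both directions. A condensed sketch using the same interface names and addresses seen in this run:

    # Condensed reproduction of the nvmf_tcp_init sequence captured above.
    TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"                    # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"             # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator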
13:15:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.148 13:15:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:52.149 13:15:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.149 13:15:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:52.149 13:15:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.149 13:15:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:52.149 13:15:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.409 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.409 [2024-07-15 13:15:14.052707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.409 [2024-07-15 13:15:14.052764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95e4b0 with addr=10.0.0.2, port=4420 00:29:52.409 [2024-07-15 13:15:14.052790] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:52.409 [2024-07-15 13:15:14.052802] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:52.409 [2024-07-15 13:15:14.052810] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:52.409 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:52.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:52.409 Initializing NVMe Controllers 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:52.409 00:29:52.409 real 0m0.119s 00:29:52.409 user 0m0.049s 00:29:52.409 sys 0m0.069s 
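tc1 above is a negative test: the reconnect example is launched before any target application is running, so every connect() attempt fails with ECONNREFUSED (errno 111), spdk_nvme_probe() reports "Create probe context failed", and that failure is exactly what the NOT wrapper expects. Stripped of the wrapper machinery, the check amounts to roughly this (path taken from this job's workspace):

    # tc1 in condensed form: reconnect must fail while nothing listens on 10.0.0.2:4420.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    if "$SPDK_DIR/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "reconnect unexpectedly succeeded with no target running" >&2
        exit 1
    fi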
00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:52.409 ************************************ 00:29:52.409 END TEST nvmf_target_disconnect_tc1 00:29:52.409 ************************************ 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:52.409 ************************************ 00:29:52.409 START TEST nvmf_target_disconnect_tc2 00:29:52.409 ************************************ 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=888213 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 888213 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 888213 ']' 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
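For tc2, disconnect_init starts a real target: nvmf_tgt is launched inside the target namespace, pinned by -m 0xF0 to cores 4-7 (the reactors reported above), and the script waits for its RPC socket before configuring it. A sketch of that step, with a simplified stand-in for the waitforlisten helper:

    # Sketch of the tc2 target launch captured above; waitforlisten is approximated here.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!                                              # 888213 in this run

    # Poll the default RPC socket until the target answers (simplified waitforlisten).
    until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done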
00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.409 13:15:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.409 [2024-07-15 13:15:14.213053] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:29:52.409 [2024-07-15 13:15:14.213111] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.669 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.669 [2024-07-15 13:15:14.309759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.669 [2024-07-15 13:15:14.403486] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.669 [2024-07-15 13:15:14.403548] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.669 [2024-07-15 13:15:14.403557] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.669 [2024-07-15 13:15:14.403564] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.669 [2024-07-15 13:15:14.403570] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.669 [2024-07-15 13:15:14.403744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:52.669 [2024-07-15 13:15:14.403901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:52.669 [2024-07-15 13:15:14.404061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:52.669 [2024-07-15 13:15:14.404063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:53.240 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:53.240 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:53.240 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:53.240 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:53.240 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.240 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.240 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:53.240 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.240 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.501 Malloc0 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:53.501 13:15:15 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.501 [2024-07-15 13:15:15.082440] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.501 [2024-07-15 13:15:15.122762] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=888359 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:53.501 13:15:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:53.501 EAL: No free 2048 kB 
hugepages reported on node 1 00:29:55.420 13:15:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 888213 00:29:55.420 13:15:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Write completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Write completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Write completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Write completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Write completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Write completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Write completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Write completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Write completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Write completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Write completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Write completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 Read completed with error (sct=0, sc=8) 00:29:55.420 starting I/O failed 00:29:55.420 [2024-07-15 13:15:17.156597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.420 [2024-07-15 13:15:17.156988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-07-15 13:15:17.157011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable 
to recover it. 00:29:55.420 [2024-07-15 13:15:17.157223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-07-15 13:15:17.157248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-07-15 13:15:17.157757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-07-15 13:15:17.157794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-07-15 13:15:17.157998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-07-15 13:15:17.158012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-07-15 13:15:17.158488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.420 [2024-07-15 13:15:17.158524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.420 qpair failed and we were unable to recover it. 00:29:55.420 [2024-07-15 13:15:17.158925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.158939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.159134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.159145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.159646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.159686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.160077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.160090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.160551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.160587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.160947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.160961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 
00:29:55.421 [2024-07-15 13:15:17.161493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.161530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.161753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.161767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.161936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.161947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.162270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.162282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.162612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.162623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.162966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.162977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.163338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.163350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.163647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.163658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.163976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.163987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.164363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.164374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 
00:29:55.421 [2024-07-15 13:15:17.164740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.164751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.165073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.165084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.165430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.165442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.165791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.165802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.166159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.166170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.166449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.166460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.166760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.166771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.166974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.166985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.167338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.167349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.167688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.167699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 
00:29:55.421 [2024-07-15 13:15:17.168176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.168186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.168538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.168549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.168868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.168879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.169086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.169097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.169397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.169407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.169740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.169751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.170003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.170013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.170387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.170397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.170809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.170819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.171201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.171211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 
00:29:55.421 [2024-07-15 13:15:17.171468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.171478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.171849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.171859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.172209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.172219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.172607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.172617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.421 [2024-07-15 13:15:17.172879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.421 [2024-07-15 13:15:17.172889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.421 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.173249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.173259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.173614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.173630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.173950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.173961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.174310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.174321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.174586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.174596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 
00:29:55.422 [2024-07-15 13:15:17.174930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.174940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.175302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.175313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.175507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.175517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.175841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.175852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.176171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.176182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.176537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.176548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.176855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.176866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.177252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.177265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.177605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.177617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.177973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.177985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 
00:29:55.422 [2024-07-15 13:15:17.178348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.178361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.178599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.178612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.178923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.178936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.179276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.179289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.179609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.179620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.179959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.179971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.180319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.180332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.180704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.180716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.181042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.181054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.181423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.181436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 
00:29:55.422 [2024-07-15 13:15:17.181759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.181771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.182024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.182035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.182373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.182386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.182722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.182734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.183048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.183061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.183260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.183273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.183643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.183655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.184013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.184025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.184388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.184400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.184733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.184745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 
00:29:55.422 [2024-07-15 13:15:17.185082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.185095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.185315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.185327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.185652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.185664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.185972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.185984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.186281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.186293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.422 qpair failed and we were unable to recover it. 00:29:55.422 [2024-07-15 13:15:17.186632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.422 [2024-07-15 13:15:17.186643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.186979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.186994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.187323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.187337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.187639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.187651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.187892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.187903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 
00:29:55.423 [2024-07-15 13:15:17.188250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.188262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.188600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.188612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.188924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.188936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.189290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.189302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.189666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.189678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.190036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.190049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.190306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.190318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.190643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.190655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.190985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.190997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.191294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.191306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 
00:29:55.423 [2024-07-15 13:15:17.191737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.191749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.192082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.192094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.192487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.192504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.192874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.192890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.193266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.193283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.193654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.193670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.194002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.194019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.194387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.194402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.194724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.194739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.195095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.195111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 
00:29:55.423 [2024-07-15 13:15:17.195506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.195521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.195728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.195743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.195967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.195982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.196389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.196405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.196786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.196801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.197160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.197176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.197443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.197460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.197652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.197666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.198009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.198026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.198378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.198394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 
00:29:55.423 [2024-07-15 13:15:17.198761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.198777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.199022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.199038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.199271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.199287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.199699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.199714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.199957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.199972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.200350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.200366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.423 qpair failed and we were unable to recover it. 00:29:55.423 [2024-07-15 13:15:17.200606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.423 [2024-07-15 13:15:17.200624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.200972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.200988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.201330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.201346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.201664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.201679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 
00:29:55.424 [2024-07-15 13:15:17.201976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.201991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.202341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.202357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.202740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.202756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.203088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.203103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.203363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.203378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.203768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.203783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.204142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.204157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.204500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.204516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.204851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.204868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.205138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.205153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 
00:29:55.424 [2024-07-15 13:15:17.205509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.205525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.205869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.205890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.206245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.206267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.206636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.206656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.207008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.207029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.207293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.207315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.207705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.207725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.208099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.208120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.208433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.208456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.208843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.208864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 
00:29:55.424 [2024-07-15 13:15:17.209238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.209259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.209630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.209651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.210127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.210148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.210413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.210437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.210800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.210821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.211212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.211241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.211620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.211641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.212031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.212052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.424 [2024-07-15 13:15:17.212287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.424 [2024-07-15 13:15:17.212308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.424 qpair failed and we were unable to recover it. 00:29:55.425 [2024-07-15 13:15:17.212672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.425 [2024-07-15 13:15:17.212693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.425 qpair failed and we were unable to recover it. 
00:29:55.425 [2024-07-15 13:15:17.213050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.425 [2024-07-15 13:15:17.213071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:55.425 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420, qpair not recovered) repeats continuously from 13:15:17.213 through 13:15:17.293 ...]
00:29:55.708 [2024-07-15 13:15:17.293666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.708 [2024-07-15 13:15:17.293697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:55.708 qpair failed and we were unable to recover it.
00:29:55.708 [2024-07-15 13:15:17.294056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.294084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.294375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.294403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.294793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.294822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.295187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.295215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.295510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.295539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.295914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.295942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.296323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.296352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.296644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.296672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.297060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.297087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.297469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.297499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 
00:29:55.708 [2024-07-15 13:15:17.297834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.297864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.298221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.298260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.298646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.298674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.298883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.298910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.299252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.299280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.299649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.299677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.300037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.300064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.300337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.300366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.300621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.300651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.301016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.301043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 
00:29:55.708 [2024-07-15 13:15:17.301379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.301409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.301788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.301816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.302199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.302227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.302513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.302545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.302915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.302945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.303295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.303324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.303710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.303738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.304164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.304192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.304647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.304677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.305058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.305086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 
00:29:55.708 [2024-07-15 13:15:17.305489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.305519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.305772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.305798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.306162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.306190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.708 [2024-07-15 13:15:17.306572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.708 [2024-07-15 13:15:17.306602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.708 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.306999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.307027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.307533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.307562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.307920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.307948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.308337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.308366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.308740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.308767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.309139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.309167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 
00:29:55.709 [2024-07-15 13:15:17.309539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.309568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.309945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.309973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.310306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.310335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.310706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.310734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.311086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.311114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.311513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.311542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.311905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.311933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.312314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.312343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.312705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.312732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.313093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.313121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 
00:29:55.709 [2024-07-15 13:15:17.313527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.313557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.313815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.313842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.314194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.314221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.314508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.314537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.314945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.314973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.315357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.315387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.315742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.315770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.316030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.316058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.316433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.316461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.316845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.316873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 
00:29:55.709 [2024-07-15 13:15:17.317260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.317289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.317687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.317715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.318113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.318140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.318547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.318582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.318950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.318979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.319343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.319372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.319632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.319659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.320044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.320072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.320472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.709 [2024-07-15 13:15:17.320501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.709 qpair failed and we were unable to recover it. 00:29:55.709 [2024-07-15 13:15:17.320885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.320912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-07-15 13:15:17.321284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.321314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.321698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.321726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.322102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.322130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.322483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.322513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.322879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.322909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.323295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.323324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.323582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.323611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.323910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.323938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.324316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.324344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.324774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.324802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-07-15 13:15:17.325184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.325212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.325474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.325504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.325834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.325862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.326262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.326292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.326744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.326773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.327190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.327218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.327668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.327698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.328084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.328113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.328487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.328517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.328882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.328910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-07-15 13:15:17.329284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.329314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.329766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.329795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.330156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.330184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.330561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.330591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.330983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.331012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.331301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.331331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.331726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.331754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.332118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.332147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.332597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.332626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.332925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.332953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-07-15 13:15:17.333321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.333350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.333742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.333770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.334142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.334170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.334562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.334596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.334950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.334979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.335295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.335325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.335696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.335724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.336140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.336168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.336537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.336567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 00:29:55.710 [2024-07-15 13:15:17.336954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-15 13:15:17.336983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.710 qpair failed and we were unable to recover it. 
00:29:55.710 [2024-07-15 13:15:17.337309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.337339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.337712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.337741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.338103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.338132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.338574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.338603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.338970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.338999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.339396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.339426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.339782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.339811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.340240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.340270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.340659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.340687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.340951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.340980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 
00:29:55.711 [2024-07-15 13:15:17.341359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.341388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.341796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.341826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.342123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.342151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.342440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.342470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.342844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.342873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.343252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.343282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.343719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.343747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.344098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.344127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.344535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.344565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 00:29:55.711 [2024-07-15 13:15:17.344926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-15 13:15:17.344955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.711 qpair failed and we were unable to recover it. 
00:29:55.711 [2024-07-15 13:15:17.345318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.711 [2024-07-15 13:15:17.345348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:55.711 qpair failed and we were unable to recover it.
00:29:55.711-00:29:55.716 [the same pair of errors -- posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." -- repeats continuously from 13:15:17.345 through 13:15:17.430]
00:29:55.716 [2024-07-15 13:15:17.430051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.716 [2024-07-15 13:15:17.430097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:55.716 qpair failed and we were unable to recover it.
00:29:55.716 [2024-07-15 13:15:17.430524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.430571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.430954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.431000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.431440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.431487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.431894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.431948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.432356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.432402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.432689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.432736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.433167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.433212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.433606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.433652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.434063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.434108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.434522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.434569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 
00:29:55.716 [2024-07-15 13:15:17.434985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.435031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.435443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.435490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.435897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.435943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.436357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.436402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.436793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.436838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.437157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.437203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.437620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.437665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.438050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.438096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.438509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.438555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.438970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.439017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 
00:29:55.716 [2024-07-15 13:15:17.439312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.439344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.439715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.439740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.440112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.440155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.440578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.440624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.441027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.441072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.441478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.441525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.441914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.441949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.442321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.442357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.442772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.716 [2024-07-15 13:15:17.442807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.716 qpair failed and we were unable to recover it. 00:29:55.716 [2024-07-15 13:15:17.443204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.443258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.717 [2024-07-15 13:15:17.443625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.443659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.444038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.444074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.444447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.444483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.444853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.444887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.445327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.445363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.445761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.445796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.446180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.446213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.446599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.446623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.446995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.447015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.447342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.447364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.717 [2024-07-15 13:15:17.447723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.447757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.448162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.448197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.448601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.448636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.449023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.449066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.449483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.449519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.449930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.449965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.450371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.450407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.450812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.450846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.451221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.451266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.451625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.451659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.717 [2024-07-15 13:15:17.452024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.452059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.452434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.452469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.452859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.452894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.453307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.453343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.453761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.453795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.454194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.454213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.454453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.454469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.454680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.454696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.455067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.455093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.455484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.455512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 
00:29:55.717 [2024-07-15 13:15:17.455775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.455802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.456173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.456199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.717 qpair failed and we were unable to recover it. 00:29:55.717 [2024-07-15 13:15:17.456561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.717 [2024-07-15 13:15:17.456589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.456878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.456904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.457256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.457283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.457674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.457701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.458081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.458106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.458482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.458510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.458860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.458885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.459258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.459286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 
00:29:55.718 [2024-07-15 13:15:17.459681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.459708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.460071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.460090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.460321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.460336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.460681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.460696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.460940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.460955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.461314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.461341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.461702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.461729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.462082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.462109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.462591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.462610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.462955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.462970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 
00:29:55.718 [2024-07-15 13:15:17.463309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.463336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.463726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.463753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.464109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.464135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.464534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.464555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.464956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.464969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.465312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.465336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.465742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.465765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.466161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.466181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.466431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.466444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.466807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.466820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 
00:29:55.718 [2024-07-15 13:15:17.467204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.467224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.467593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.467615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.467963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.467986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.468321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.468338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.468710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.468723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.469096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.469109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.469472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.469495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.469867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.469891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.470222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.470250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.470612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.470626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 
00:29:55.718 [2024-07-15 13:15:17.470983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.470996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.471338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.471363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.718 [2024-07-15 13:15:17.471744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.718 [2024-07-15 13:15:17.471760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.718 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.472091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.472105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.472473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.472495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.472872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.472895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.473262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.473284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.473634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.473657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.474008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.474031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.474397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.474420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 
00:29:55.719 [2024-07-15 13:15:17.474798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.474822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.475162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.475180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.475528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.475551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.475913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.475930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.476213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.476225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.476601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.476614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.476932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.476945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.477287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.477301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.477659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.477672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.478036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.478052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 
00:29:55.719 [2024-07-15 13:15:17.478401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.478416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.478770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.478785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.479139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.479153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.479506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.479526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.479899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.479914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.480144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.480158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.480364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.480381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.480766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.480781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.481115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.481130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.481328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.481344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 
00:29:55.719 [2024-07-15 13:15:17.481744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.481758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.482105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.482120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.482435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.482450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.482801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.482816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.483207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.483222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.483567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.483583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.483916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.483932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.484125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.484141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.484500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.484517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.484847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.484863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 
00:29:55.719 [2024-07-15 13:15:17.485162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.485178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.485519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.485536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.719 qpair failed and we were unable to recover it. 00:29:55.719 [2024-07-15 13:15:17.485883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.719 [2024-07-15 13:15:17.485899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.486225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.486246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.486619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.486634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.486976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.486991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.487347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.487364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.487574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.487590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.487947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.487962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.488124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.488140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 
00:29:55.720 [2024-07-15 13:15:17.488373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c70800 is same with the state(5) to be set
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Write completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Write completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Write completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Write completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Write completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Write completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Write completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Write completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Write completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Read completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 Write completed with error (sct=0, sc=8)
00:29:55.720 starting I/O failed
00:29:55.720 [2024-07-15 13:15:17.489286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:55.720 [2024-07-15 13:15:17.489881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.720 [2024-07-15 13:15:17.489982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe160000b90 with addr=10.0.0.2, port=4420
00:29:55.720 qpair failed and we were unable to recover it.
00:29:55.720 [2024-07-15 13:15:17.490510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.490602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe160000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.491701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.491743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.492124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.492148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.492391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.492413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.492766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.492788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.493185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.493208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.493597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.493621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.494008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.494030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.494389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.494412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.494761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.494784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 
00:29:55.720 [2024-07-15 13:15:17.495168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.495191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.495555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.495579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.495945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.495967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.496356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.496377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.496749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.496770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.497164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.497185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.498426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.498471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.498859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.498881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.499214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.499246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 00:29:55.720 [2024-07-15 13:15:17.499488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.720 [2024-07-15 13:15:17.499511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.720 qpair failed and we were unable to recover it. 
00:29:55.720 [2024-07-15 13:15:17.499863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.499885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.500271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.500294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.500546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.500569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.500801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.500823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.501223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.501254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.501624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.501645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.502028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.502057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.502445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.502474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.502885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.502913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.503300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.503329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 
00:29:55.721 [2024-07-15 13:15:17.503710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.503738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.504106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.504135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.504492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.504523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.504876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.504905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.505272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.505302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.505713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.505741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.506087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.506117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.506437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.506467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.506850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.506879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.507224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.507261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 
00:29:55.721 [2024-07-15 13:15:17.507544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.507572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.507929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.507958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.508340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.508372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.508800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.508828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.509219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.509258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.509676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.509712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.510008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.510037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.510434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.510465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.510845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.510873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.511246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.511275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 
00:29:55.721 [2024-07-15 13:15:17.511695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.511724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.512098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.721 [2024-07-15 13:15:17.512127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.721 qpair failed and we were unable to recover it. 00:29:55.721 [2024-07-15 13:15:17.512513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-07-15 13:15:17.512543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-07-15 13:15:17.512923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-07-15 13:15:17.512951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.722 [2024-07-15 13:15:17.513329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.722 [2024-07-15 13:15:17.513359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.722 qpair failed and we were unable to recover it. 00:29:55.993 [2024-07-15 13:15:17.513695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.993 [2024-07-15 13:15:17.513724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.993 qpair failed and we were unable to recover it. 00:29:55.993 [2024-07-15 13:15:17.514137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.993 [2024-07-15 13:15:17.514166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.993 qpair failed and we were unable to recover it. 00:29:55.993 [2024-07-15 13:15:17.514550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.993 [2024-07-15 13:15:17.514580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.993 qpair failed and we were unable to recover it. 00:29:55.993 [2024-07-15 13:15:17.514942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.993 [2024-07-15 13:15:17.514970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.993 qpair failed and we were unable to recover it. 00:29:55.993 [2024-07-15 13:15:17.515350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.993 [2024-07-15 13:15:17.515381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.993 qpair failed and we were unable to recover it. 
00:29:55.993 [2024-07-15 13:15:17.515751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.993 [2024-07-15 13:15:17.515780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.993 qpair failed and we were unable to recover it. 00:29:55.993 [2024-07-15 13:15:17.516147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.993 [2024-07-15 13:15:17.516176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.516529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.516559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.516933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.516962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.517334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.517365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.517748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.517777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.518127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.518155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.518609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.518640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.519064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.519094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.519452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.519481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 
00:29:55.994 [2024-07-15 13:15:17.519855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.519883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.520151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.520181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.520602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.520633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.521000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.521028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.521414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.521444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.521824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.521852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.522193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.522222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.522618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.522646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.523027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.523057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.523439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.523469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 
00:29:55.994 [2024-07-15 13:15:17.523849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.523878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.524272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.524301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.524689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.524717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.525049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.525078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.525464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.525494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.525937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.525970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.526356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.526386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.526795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.526824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.527194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.527222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.527589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.527618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 
00:29:55.994 [2024-07-15 13:15:17.527888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.527918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.528170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.994 [2024-07-15 13:15:17.528203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.994 qpair failed and we were unable to recover it. 00:29:55.994 [2024-07-15 13:15:17.528599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.528628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.529010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.529040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.529415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.529446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.529843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.529872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.530225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.530263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.530661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.530690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.531069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.531098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.531457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.531486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 
00:29:55.995 [2024-07-15 13:15:17.531826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.531855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.532248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.532278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.532528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.532557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.532936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.532964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.533347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.533377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.533780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.533808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.534175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.534203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.534574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.534604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.534981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.535009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.535268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.535300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 
00:29:55.995 [2024-07-15 13:15:17.535592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.535620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.535878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.535908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.536312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.536344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.536684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.536712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.537103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.537131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.537505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.537535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.537914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.537942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.538329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.538358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.538727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.538755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.539138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.539167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 
00:29:55.995 [2024-07-15 13:15:17.539531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.539560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.539925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.539954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.540343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.540373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-07-15 13:15:17.540731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-07-15 13:15:17.540759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.541128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.541156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.541545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.541580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.541841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.541870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.542245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.542275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.542675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.542704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.543081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.543111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 
00:29:55.996 [2024-07-15 13:15:17.543498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.543528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.543937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.543967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.544340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.544369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.544745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.544774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.545166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.545194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.545578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.545609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.545978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.546008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.546388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.546418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.546805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.546833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.547204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.547240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 
00:29:55.996 [2024-07-15 13:15:17.547592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.547621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.548020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.548049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.548417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.548447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.548829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.548858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.549250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.549279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.549667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.549695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.550083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.550112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.550387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.550418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.550788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.550816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.551200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.551238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 
00:29:55.996 [2024-07-15 13:15:17.551589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.551617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.552063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.552093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.552489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.996 [2024-07-15 13:15:17.552520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.996 qpair failed and we were unable to recover it. 00:29:55.996 [2024-07-15 13:15:17.552898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.552926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.553295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.553325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.553703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.553732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.554113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.554142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.554513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.554543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.554924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.554952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.555280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.555310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 
00:29:55.997 [2024-07-15 13:15:17.555717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.555746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.556107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.556136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.556392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.556422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.556804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.556833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.557222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.557261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.557634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.557667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.558043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.558073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.558460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.558489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.558824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.558853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.559250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.559280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 
00:29:55.997 [2024-07-15 13:15:17.559690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.559718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.560031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.560060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.560439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.560470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.560857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.560885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.561141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.561169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.561534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.561564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.561910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.561938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.562356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.562386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.562774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.562802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.563187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.563216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 
00:29:55.997 [2024-07-15 13:15:17.563592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.563622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.563889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.563919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.564303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.564334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.564717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.564746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.565117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.997 [2024-07-15 13:15:17.565147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.997 qpair failed and we were unable to recover it. 00:29:55.997 [2024-07-15 13:15:17.565509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.565539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.565926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.565954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.566302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.566332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.566605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.566635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.567016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.567044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 
00:29:55.998 [2024-07-15 13:15:17.567418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.567447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.567840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.567869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.568255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.568285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.568671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.568700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.569083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.569112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.569436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.569468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.569835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.569865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.570267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.570298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.570718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.570748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.571117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.571146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 
00:29:55.998 [2024-07-15 13:15:17.571520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.571551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.571912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.571943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.572303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.572332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.572713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.572741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.573125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.573154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.573530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.573566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.573948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.573978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.574339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.574369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.574757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.574785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.575059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.575087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 
00:29:55.998 [2024-07-15 13:15:17.575471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.575501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.575764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.575794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.576200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.576239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.576614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.998 [2024-07-15 13:15:17.576643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.998 qpair failed and we were unable to recover it. 00:29:55.998 [2024-07-15 13:15:17.577018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.577047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.577444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.577475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.577835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.577864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.578239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.578269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.578713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.578742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.578988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.579017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 
00:29:55.999 [2024-07-15 13:15:17.579438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.579469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.579847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.579876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.580226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.580273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.580643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.580673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.581065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.581094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.581482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.581512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.581877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.581905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.582288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.582318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.582708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.582737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.583105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.583135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 
00:29:55.999 [2024-07-15 13:15:17.583566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.583597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.583942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.999 [2024-07-15 13:15:17.583971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:55.999 qpair failed and we were unable to recover it. 00:29:55.999 [2024-07-15 13:15:17.584445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.584475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.584842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.584871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.585283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.585313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.585641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.585671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.586070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.586100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.586365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.586398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.586756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.586786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.587118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.587147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 
00:29:56.000 [2024-07-15 13:15:17.587513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.587543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.587911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.587940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.588314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.588346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.588760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.588788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.589165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.589194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.589442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.589479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.589869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.589898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.590269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.590299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.590710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.590738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.591123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.591152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 
00:29:56.000 [2024-07-15 13:15:17.591404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.591434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.591820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.591848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.592249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.592280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.592623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.592651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.593056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.593085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.593307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.593338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.593708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.593736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.594122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.594151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.594481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.594512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.594884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.594913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 
00:29:56.000 [2024-07-15 13:15:17.595161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.595191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.595593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.000 [2024-07-15 13:15:17.595623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.000 qpair failed and we were unable to recover it. 00:29:56.000 [2024-07-15 13:15:17.595989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.596017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.596408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.596439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.596827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.596856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.597241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.597273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.597685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.597714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.598097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.598125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.598510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.598541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.598930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.598960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 
00:29:56.001 [2024-07-15 13:15:17.599347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.599377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.599753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.599782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.600170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.600200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.600599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.600630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.600997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.601026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.601416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.601446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.601826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.601855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.602217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.602258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.602632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.602660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.603078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.603107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 
00:29:56.001 [2024-07-15 13:15:17.603493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.603523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.603865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.603896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.604267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.604299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.604658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.604687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.604955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.604985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.605325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.605361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.605751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.605781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.606162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.606191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.606581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.606613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.001 [2024-07-15 13:15:17.606921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.606951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 
00:29:56.001 [2024-07-15 13:15:17.607334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.001 [2024-07-15 13:15:17.607364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.001 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.607760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.607790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.608162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.608191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.608572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.608603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.608989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.609018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.609398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.609429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.609700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.609730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.610112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.610141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.610513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.610543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.610932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.610962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 
00:29:56.002 [2024-07-15 13:15:17.611350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.611379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.611725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.611753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.612119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.612150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.612578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.612609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.612875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.612904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.613326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.613356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.613739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.613768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.614146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.614176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.614583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.614613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 00:29:56.002 [2024-07-15 13:15:17.614999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.002 [2024-07-15 13:15:17.615029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.002 qpair failed and we were unable to recover it. 
00:29:56.002 [2024-07-15 13:15:17.615412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.002 [2024-07-15 13:15:17.615443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:56.002 qpair failed and we were unable to recover it.
[... 00:29:56.002-00:29:56.009: the same pair of errors (posix.c:1038:posix_sock_create connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420), each ending with "qpair failed and we were unable to recover it.", repeats for every connection attempt from 13:15:17.615412 through 13:15:17.703758 ...]
00:29:56.009 [2024-07-15 13:15:17.703730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.009 [2024-07-15 13:15:17.703758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:56.009 qpair failed and we were unable to recover it.
00:29:56.009 [2024-07-15 13:15:17.704011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-07-15 13:15:17.704040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-07-15 13:15:17.704512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-07-15 13:15:17.704543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-07-15 13:15:17.704908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-07-15 13:15:17.704937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-07-15 13:15:17.705327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.009 [2024-07-15 13:15:17.705357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.705736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.705764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.706152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.706181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.706583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.706608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.707012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.707037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.707407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.707430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.707804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.707825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 
00:29:56.010 [2024-07-15 13:15:17.708208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.708226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.708591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.708605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.708962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.708977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.709357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.709372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.709735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.709748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.710108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.710122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.710478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.710493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.710820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.710836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.711181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.711195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.711578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.711592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 
00:29:56.010 [2024-07-15 13:15:17.711923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.711938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.712168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.712181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.712397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.712414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.712761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.712775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.713128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.713143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.713498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.713512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.713717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.713730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.714087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.714100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.714484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.714498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.714872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.714886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 
00:29:56.010 [2024-07-15 13:15:17.715241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.715257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.715628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.715641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-15 13:15:17.715846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.010 [2024-07-15 13:15:17.715863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.716178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.716192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.716553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.716568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.716979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.716994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.717380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.717393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.717738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.717752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.718120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.718135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.718508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.718522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 
00:29:56.011 [2024-07-15 13:15:17.718901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.718920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.719310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.719330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.719691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.719709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.720066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.720084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.720448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.720468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.720810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.720827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.721226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.721255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.721633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.721651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.721905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.721922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.722170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.722189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 
00:29:56.011 [2024-07-15 13:15:17.722599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.722618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.722984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.723002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.723346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.723364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.723726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.723744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.723971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.723988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.011 [2024-07-15 13:15:17.724355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.011 [2024-07-15 13:15:17.724375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.011 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.724743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.724762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.725119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.725137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.725506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.725526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.725872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.725890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 
00:29:56.012 [2024-07-15 13:15:17.726255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.726276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.726610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.726627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.726879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.726897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.727274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.727293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.727529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.727547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.727913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.727931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.728296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.728315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.728667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.728685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.729048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.729066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.729435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.729459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 
00:29:56.012 [2024-07-15 13:15:17.729872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.729894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.730294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.730318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.730700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.730727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.731079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.731103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.731482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.731504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.731909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.731931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.732300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.732323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.732722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.732745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.733156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.733178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.733578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.733601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 
00:29:56.012 [2024-07-15 13:15:17.734002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.734025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.734417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.734441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.734844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.734866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.735248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.735271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.735629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.735651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.736046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.012 [2024-07-15 13:15:17.736068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.012 qpair failed and we were unable to recover it. 00:29:56.012 [2024-07-15 13:15:17.736480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.736504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.736900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.736923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.737323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.737346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.737752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.737775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 
00:29:56.013 [2024-07-15 13:15:17.738164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.738186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.738574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.738597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.738983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.739006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.739398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.739422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.739829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.739859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.740260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.740290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.740691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.740721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.741126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.741156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.741307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.741339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.741760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.741789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 
00:29:56.013 [2024-07-15 13:15:17.742186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.742215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.742518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.742551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.742961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.742991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.743410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.743440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.743875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.743904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.744292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.744322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.744744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.744772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.745195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.745224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.745627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.745657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.746052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.746081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 
00:29:56.013 [2024-07-15 13:15:17.746370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.746399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.746787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.746818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.747213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.747260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.747654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.747683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.013 [2024-07-15 13:15:17.748069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.013 [2024-07-15 13:15:17.748099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.013 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.748495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.748526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.748892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.748921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.749304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.749335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.749740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.749769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.750174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.750203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 
00:29:56.014 [2024-07-15 13:15:17.750615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.750644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.751036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.751066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.751460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.751492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.751877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.751907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.752164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.752195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.752665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.752697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.753085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.753115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.753498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.753529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.753924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.753953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.754411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.754441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 
00:29:56.014 [2024-07-15 13:15:17.754832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.754862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.755256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.755286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.755701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.755731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.756158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.756188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.756589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.756620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.757000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.757029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.757412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.757444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.757795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.757824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.758240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.758271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.758531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.758563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 
00:29:56.014 [2024-07-15 13:15:17.758960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.758990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.759386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.759416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.759677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.759706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.014 qpair failed and we were unable to recover it. 00:29:56.014 [2024-07-15 13:15:17.760083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.014 [2024-07-15 13:15:17.760112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.760513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.760544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.760946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.760976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.761368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.761399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.761794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.761824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.762102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.762130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.762497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.762526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 
00:29:56.015 [2024-07-15 13:15:17.762917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.762948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.763339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.763369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.763647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.763691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.764088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.764116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.764494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.764524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.764922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.764952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.765337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.765368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.765738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.765767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.766162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.766192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.766484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.766514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 
00:29:56.015 [2024-07-15 13:15:17.766908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.766939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.767332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.767361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.767754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.767783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.768144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.768174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.768562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.768596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.768976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.769005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.769408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.769440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.769710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.769740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.770166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.770195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.770583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.770614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 
00:29:56.015 [2024-07-15 13:15:17.771013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.771042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.771429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.771458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.771850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.771879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.015 qpair failed and we were unable to recover it. 00:29:56.015 [2024-07-15 13:15:17.772164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.015 [2024-07-15 13:15:17.772195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.772615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.772645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.773045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.773091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.773466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.773498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.773864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.773894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.774294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.774323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.774718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.774748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 
00:29:56.016 [2024-07-15 13:15:17.775138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.775169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.775586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.775616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.776010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.776040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.776426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.776455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.776853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.776882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.777281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.777312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.777700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.777729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.778136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.778166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.778556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.778586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.778969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.778997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 
00:29:56.016 [2024-07-15 13:15:17.779345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.779375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.779805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.779835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.780224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.780281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.780686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.780714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.781114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.781144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.781531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.781562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.781954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.781983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.782378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.782410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.782803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.782833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.783254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.783284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 
00:29:56.016 [2024-07-15 13:15:17.783714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.783743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.784143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.784172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.784570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.016 [2024-07-15 13:15:17.784601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.016 qpair failed and we were unable to recover it. 00:29:56.016 [2024-07-15 13:15:17.785002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.785031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.785416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.785448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.785844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.785875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.786247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.786278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.786667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.786696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.787050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.787078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.787488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.787518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 
00:29:56.017 [2024-07-15 13:15:17.787910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.787939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.788193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.788222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.788649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.788679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.789074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.789103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.789489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.789519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.789924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.789954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.790339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.790371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.790772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.790802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.791217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.791260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.791666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.791696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 
00:29:56.017 [2024-07-15 13:15:17.792079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.792108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.792495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.792526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.792911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.792941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.793333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.793364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.793790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.793819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.017 qpair failed and we were unable to recover it. 00:29:56.017 [2024-07-15 13:15:17.794207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.017 [2024-07-15 13:15:17.794246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.794623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.794653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.795042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.795073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.795445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.795477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.795872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.795901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 
00:29:56.018 [2024-07-15 13:15:17.796287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.796318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.796599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.796629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.797033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.797069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.797497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.797530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.797916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.797945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.798346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.798376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.798787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.798817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.799198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.799227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.799529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.799560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.799980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.800011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 
00:29:56.018 [2024-07-15 13:15:17.800404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.800434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.800829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.800860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.801272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.801302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.801689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.801718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.802118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.802147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.802541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.802571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.802928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.802959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.803359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.803390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.803814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.803843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.804239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.804275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 
00:29:56.018 [2024-07-15 13:15:17.804703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.804733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.805132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.805161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.805559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.805591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.805986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.806015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.018 qpair failed and we were unable to recover it. 00:29:56.018 [2024-07-15 13:15:17.806384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.018 [2024-07-15 13:15:17.806414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-07-15 13:15:17.806690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-07-15 13:15:17.806721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-07-15 13:15:17.807107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-07-15 13:15:17.807137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.019 [2024-07-15 13:15:17.807503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.019 [2024-07-15 13:15:17.807534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.019 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.807912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.807943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.808221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.808273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 
00:29:56.291 [2024-07-15 13:15:17.808687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.808715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.808981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.809015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.809418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.809448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.809847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.809876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.810258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.810288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.810705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.810734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.811131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.811162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.811554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.811585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.811928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.811958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.812367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.812396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 
00:29:56.291 [2024-07-15 13:15:17.812805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.812834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.813227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.813271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.813707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.813744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.814115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.814143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.814560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.814591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.814994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.815024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.815387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.815416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.815827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.815855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.816269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.816301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.816686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.816716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 
00:29:56.291 [2024-07-15 13:15:17.817108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.291 [2024-07-15 13:15:17.817138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.291 qpair failed and we were unable to recover it. 00:29:56.291 [2024-07-15 13:15:17.817418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.817449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.817830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.817859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.818103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.818133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.818422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.818453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.818847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.818876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.819270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.819300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.819750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.819780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.820204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.820255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.820649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.820680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 
00:29:56.292 [2024-07-15 13:15:17.821115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.821144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.821534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.821565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.821966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.821995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.822397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.822428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.822821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.822851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.823255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.823285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.823713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.823742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.824136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.824166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.824563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.824594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 00:29:56.292 [2024-07-15 13:15:17.824984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.292 [2024-07-15 13:15:17.825015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.292 qpair failed and we were unable to recover it. 
00:29:56.292 [2024-07-15 13:15:17.825398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.292 [2024-07-15 13:15:17.825428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:56.292 qpair failed and we were unable to recover it.
00:29:56.292-00:29:56.297 [the three messages above repeat for every reconnect attempt in this window, roughly 200 times from 13:15:17.825398 through 13:15:17.912468, always with errno = 111, tqpair=0x7fe150000b90, addr=10.0.0.2, port=4420; only the per-attempt timestamps differ]
00:29:56.297 [2024-07-15 13:15:17.912910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.297 [2024-07-15 13:15:17.912963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.297 qpair failed and we were unable to recover it. 00:29:56.297 [2024-07-15 13:15:17.913399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.297 [2024-07-15 13:15:17.913452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.297 qpair failed and we were unable to recover it. 00:29:56.297 [2024-07-15 13:15:17.913870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.297 [2024-07-15 13:15:17.913920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.297 qpair failed and we were unable to recover it. 00:29:56.297 [2024-07-15 13:15:17.914366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.297 [2024-07-15 13:15:17.914421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.297 qpair failed and we were unable to recover it. 00:29:56.297 [2024-07-15 13:15:17.914854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.297 [2024-07-15 13:15:17.914906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.297 qpair failed and we were unable to recover it. 00:29:56.297 [2024-07-15 13:15:17.915333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.297 [2024-07-15 13:15:17.915388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.297 qpair failed and we were unable to recover it. 00:29:56.297 [2024-07-15 13:15:17.915841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.297 [2024-07-15 13:15:17.915894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.297 qpair failed and we were unable to recover it. 00:29:56.297 [2024-07-15 13:15:17.916213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.916296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.916778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.916833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.917285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.917339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 
00:29:56.298 [2024-07-15 13:15:17.917682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.917734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.918167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.918220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.918693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.918746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.919112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.919164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.919596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.919648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.920093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.920145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.920552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.920608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.921036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.921088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.921520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.921576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.922005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.922054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 
00:29:56.298 [2024-07-15 13:15:17.922408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.922461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.922898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.922950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.923388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.923440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.923910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.923960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.924404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.924457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.924893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.924942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.925259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.925317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.925744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.925795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.926217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.926265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.926706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.926751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 
00:29:56.298 [2024-07-15 13:15:17.927167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.927204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.927609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.927645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.928039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.928077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.928501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.928537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.928952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.928989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.929424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.929462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.929741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.929774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.930181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.930217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.930634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.930670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.931060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.931096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 
00:29:56.298 [2024-07-15 13:15:17.931423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.931459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.931627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.931662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.932036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.932061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.932327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.932350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.932597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.932617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.932888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.932920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.933324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.933360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.298 [2024-07-15 13:15:17.933777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.298 [2024-07-15 13:15:17.933813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.298 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.934201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.934248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.934648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.934685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 
00:29:56.299 [2024-07-15 13:15:17.935080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.935116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.935497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.935534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.935936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.935970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.936370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.936408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.936804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.936842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.937246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.937284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.937717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.937754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.938151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.938179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.938560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.938591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.938989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.939018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 
00:29:56.299 [2024-07-15 13:15:17.939412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.939442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.939865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.939893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.940276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.940307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.940698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.940726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.941086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.941110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.941491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.941520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.941903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.941931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.942347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.942377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.942755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.942784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.943220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.943265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 
00:29:56.299 [2024-07-15 13:15:17.943683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.943707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.944110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.944138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.944534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.944562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.944996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.945013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.945450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.945468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.945844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.945872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.946272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.946302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.946685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.946712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.947128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.947146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.947510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.947526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 
00:29:56.299 [2024-07-15 13:15:17.947886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.947913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.948293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.948324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.948689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.948714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.949086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.949109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.949460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.949475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.299 qpair failed and we were unable to recover it. 00:29:56.299 [2024-07-15 13:15:17.949852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.299 [2024-07-15 13:15:17.949865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.950250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.950274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.950675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.950700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.951101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.951126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.951521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.951545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 
00:29:56.300 [2024-07-15 13:15:17.951821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.951843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.952252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.952274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.952543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.952566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.952942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.952964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.953404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.953420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.953634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.953648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.954065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.954088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.954488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.954512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.954881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.954902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.955266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.955291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 
00:29:56.300 [2024-07-15 13:15:17.955681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.955705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.956064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.956086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.956443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.956458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.956813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.956828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.957186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.957201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.957621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.957636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.957972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.957987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.958322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.958336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.958719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.958733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.959099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.959118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 
00:29:56.300 [2024-07-15 13:15:17.959489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.959505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.959870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.959885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.960266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.960281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.960649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.960666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.961033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.961049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.961404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.961419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.961782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.961798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.962167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.962182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.962548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.962567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.962923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.962939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 
00:29:56.300 [2024-07-15 13:15:17.963306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.963324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.963695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.963711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.300 [2024-07-15 13:15:17.964044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.300 [2024-07-15 13:15:17.964061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.300 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.964430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.964446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.964797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.964814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.965167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.965182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.965561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.965577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.965973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.965990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.966358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.966374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.966729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.966746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 
00:29:56.301 [2024-07-15 13:15:17.967085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.967102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.967445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.967461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.967827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.967843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.968198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.968215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.968585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.968602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.968973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.968991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.969415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.969433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.969801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.969816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.970183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.970200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 00:29:56.301 [2024-07-15 13:15:17.970547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.301 [2024-07-15 13:15:17.970568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.301 qpair failed and we were unable to recover it. 
00:29:56.306 [2024-07-15 13:15:18.048446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.048476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.048841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.048871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.049270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.049301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.049557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.049588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.050028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.050057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.050328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.050360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.050759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.050789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.051188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.051217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.051580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.051610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.051879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.051908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 
00:29:56.306 [2024-07-15 13:15:18.052267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.052296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.052704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.052733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.053129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.053160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.053531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.053561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.053939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.053969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.054243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.054277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.054698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.054728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.055120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.055148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.055431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.055462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.055872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.055908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 
00:29:56.306 [2024-07-15 13:15:18.056313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.056342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.056743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.056773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.057122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.057153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.057518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.057549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.057931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.057961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.058358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.058388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.058760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.058789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.059198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.059228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.059637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.059667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.060058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.060088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 
00:29:56.306 [2024-07-15 13:15:18.060460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.060491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.306 [2024-07-15 13:15:18.060903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.306 [2024-07-15 13:15:18.060934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.306 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.061333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.061364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.061830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.061862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.062260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.062290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.062685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.062714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.063099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.063128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.063494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.063526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.063925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.063953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.064342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.064373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 
00:29:56.307 [2024-07-15 13:15:18.064799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.064828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.065225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.065275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.065654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.065683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.066079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.066109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.066447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.066480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.066865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.066893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.067288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.067320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.067754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.067783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.068069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.068099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.068496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.068527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 
00:29:56.307 [2024-07-15 13:15:18.068918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.068947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.069333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.069364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.069786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.069815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.070214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.070253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.070647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.070676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.071078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.071109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.071488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.071517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.071808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.071836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.072243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.072274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.072678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.072713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 
00:29:56.307 [2024-07-15 13:15:18.073088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.073118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.073499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.073531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.073894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.073923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.074307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.074337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.074745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.074774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.075173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.075202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.075565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.075597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.076000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.076030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.076389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.076420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.076817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.076846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 
00:29:56.307 [2024-07-15 13:15:18.077266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.077298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.077691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.077723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.307 [2024-07-15 13:15:18.078104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.307 [2024-07-15 13:15:18.078134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.307 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.078504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.078534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.078932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.078962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.079362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.079391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.079791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.079821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.080217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.080255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.080644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.080673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.081065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.081095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 
00:29:56.308 [2024-07-15 13:15:18.081464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.081496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.081879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.081908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.082303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.082335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.082722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.082752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.083133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.083161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.083542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.083573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.083969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.084000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.084395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.084424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.084843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.084872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.085255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.085285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 
00:29:56.308 [2024-07-15 13:15:18.085629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.085659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.086057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.086085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.086489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.086519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.086905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.086934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.087335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.087364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.087763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.087793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.088073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.088102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.088522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.088551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.088947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.088976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.089360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.089396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 
00:29:56.308 [2024-07-15 13:15:18.089824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.089852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.090256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.090286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.090667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.090696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.091088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.091117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.091499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.091528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.091911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.091946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.092345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.092374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.092745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.308 [2024-07-15 13:15:18.092776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.308 qpair failed and we were unable to recover it. 00:29:56.308 [2024-07-15 13:15:18.093162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.093193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.093606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.093636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 
00:29:56.309 [2024-07-15 13:15:18.094018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.094049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.094438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.094468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.094859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.094888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.095289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.095320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.095734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.095763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.096155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.096184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.096590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.096620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.097023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.097052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.097447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.097480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.097735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.097767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 
00:29:56.309 [2024-07-15 13:15:18.098148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.098178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.098588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.098620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.099009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.099039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.099426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.099456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.099851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.099883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.100283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.100312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.100704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.100734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.101125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.101154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.101524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.101553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.101834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.101862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 
00:29:56.309 [2024-07-15 13:15:18.102122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.102153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.102560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.102590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.102868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.102895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.103291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.103321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.103724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.103753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.104135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.104164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.104572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.104602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.105004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.105033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.309 [2024-07-15 13:15:18.105425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.309 [2024-07-15 13:15:18.105457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.309 qpair failed and we were unable to recover it. 00:29:56.581 [2024-07-15 13:15:18.105864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-07-15 13:15:18.105906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 
00:29:56.581 [2024-07-15 13:15:18.106298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-07-15 13:15:18.106329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-07-15 13:15:18.106734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-07-15 13:15:18.106765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-07-15 13:15:18.107155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-07-15 13:15:18.107185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-07-15 13:15:18.107593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-07-15 13:15:18.107624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-07-15 13:15:18.108008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-07-15 13:15:18.108037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-07-15 13:15:18.108424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-07-15 13:15:18.108456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-07-15 13:15:18.108864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-07-15 13:15:18.108893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-07-15 13:15:18.109153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-07-15 13:15:18.109185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-07-15 13:15:18.109588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-07-15 13:15:18.109619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-07-15 13:15:18.109975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-07-15 13:15:18.110006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 
00:29:56.581 [2024-07-15 13:15:18.110388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-07-15 13:15:18.110420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.581 [2024-07-15 13:15:18.110678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.581 [2024-07-15 13:15:18.110708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.581 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.111101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.111130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.111531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.111560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.111947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.111978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.112349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.112379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.112638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.112671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.112952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.112982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.113397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.113429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.113808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.113838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 
00:29:56.582 [2024-07-15 13:15:18.114251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.114282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.114581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.114610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.115006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.115035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.115436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.115465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.115863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.115892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.116296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.116326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.116718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.116748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.117029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.117059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.117453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.117483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.117879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.117908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 
00:29:56.582 [2024-07-15 13:15:18.118281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.118312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.118706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.118735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.119136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.119168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.119572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.119601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.120059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.120090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.120456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.120487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.120886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.120916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.582 [2024-07-15 13:15:18.121302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.582 [2024-07-15 13:15:18.121333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.582 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.121704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.121733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.122135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.122169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 
00:29:56.583 [2024-07-15 13:15:18.122413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.122445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.122840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.122868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.123272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.123303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.123688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.123717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.124117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.124146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.124506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.124537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.124981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.125010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.125410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.125441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.125814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.125844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.126224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.126264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 
00:29:56.583 [2024-07-15 13:15:18.126689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.126719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.127113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.127144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.127531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.127560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.127964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.127995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.128384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.128414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.128790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.128819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.129218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.129264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.129642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.129671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.130057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.130086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.130521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.130553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 
00:29:56.583 [2024-07-15 13:15:18.130958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.130989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.131424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.131454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.131850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.131879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.583 qpair failed and we were unable to recover it. 00:29:56.583 [2024-07-15 13:15:18.132279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.583 [2024-07-15 13:15:18.132309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.132734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.132766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.133202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.133242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.133638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.133674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.133945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.133975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.134347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.134377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.134781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.134811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 
00:29:56.584 [2024-07-15 13:15:18.135195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.135225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.135621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.135651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.136051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.136080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.136451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.136483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.136879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.136907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.137320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.137351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.137783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.137812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.138217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.138257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.138691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.138721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.139106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.139135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 
00:29:56.584 [2024-07-15 13:15:18.139534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.139565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.139945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.139975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.140337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.140368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.140784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.140814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.141212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.141250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.141636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.141665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.142071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.142101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.142497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.142527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.142909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.142939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 00:29:56.584 [2024-07-15 13:15:18.143333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.584 [2024-07-15 13:15:18.143363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.584 qpair failed and we were unable to recover it. 
00:29:56.584 [2024-07-15 13:15:18.143661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.143689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.144086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.144115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.144513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.144544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.144944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.144974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.145256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.145288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.145723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.145753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.146160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.146190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.146597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.146627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.147017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.147047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.147446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.147476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 
00:29:56.585 [2024-07-15 13:15:18.147870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.147899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.148292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.148322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.148602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.148631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.149030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.149059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.149458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.149489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.149887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.149916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.150303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.150338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.150735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.150765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.151026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.151057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.151437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.151467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 
00:29:56.585 [2024-07-15 13:15:18.151822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.151851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.152254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.152285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.152733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.152763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.153161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.153190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.153606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.153637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.154025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.154054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.154468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.585 [2024-07-15 13:15:18.154498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.585 qpair failed and we were unable to recover it. 00:29:56.585 [2024-07-15 13:15:18.154778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.154805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.155186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.155215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.155586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.155617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 
00:29:56.586 [2024-07-15 13:15:18.156018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.156048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.156437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.156468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.156868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.156899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.157300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.157328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.157729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.157760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.158191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.158221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.158613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.158646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.158955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.158985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.159389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.159445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.159875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.159926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 
00:29:56.586 [2024-07-15 13:15:18.160334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.160387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.160821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.160878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.161207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.161277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.161632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.161683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.161997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.162051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.162486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.162539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.162971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.163023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.163516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.163572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.164041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.164095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.164536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.164591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 
00:29:56.586 [2024-07-15 13:15:18.165021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.165073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.165498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.165552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.165983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.166037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.166487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.166542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.166973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.167026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.167411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.586 [2024-07-15 13:15:18.167464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.586 qpair failed and we were unable to recover it. 00:29:56.586 [2024-07-15 13:15:18.167896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.167957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.168390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.168445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.168886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.168939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.169370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.169409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 
00:29:56.587 [2024-07-15 13:15:18.169784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.169814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.170207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.170256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.170696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.170743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.171165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.171212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.171675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.171724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.172154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.172202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.172691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.172741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.173175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.173222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.173701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.173750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.174167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.174214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 
00:29:56.587 [2024-07-15 13:15:18.174686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.174736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.175153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.175200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.175678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.175727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.176143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.176191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.176661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.176710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.177124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.177171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.177533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.177580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.177975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.178007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.178402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.178431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.587 [2024-07-15 13:15:18.178842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.178868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 
00:29:56.587 [2024-07-15 13:15:18.179273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.587 [2024-07-15 13:15:18.179313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.587 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.179758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.179803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.180251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.180297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.180751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.180796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.181208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.181280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.181713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.181761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.182176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.182222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.182710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.182758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.183180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.183216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.183686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.183723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 
00:29:56.588 [2024-07-15 13:15:18.184134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.184171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.184585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.184622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.184980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.185016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.185421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.185459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.185852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.185888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.186319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.186356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.186787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.186829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.187205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.187247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.187651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.187688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.188080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.188116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 
00:29:56.588 [2024-07-15 13:15:18.188435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.188472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.188880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.188907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.189294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.189317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.189698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.189720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.190114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.190147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.190593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.190631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.191050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.191086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.191353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.191388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.191798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.191833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.192263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.192301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 
00:29:56.588 [2024-07-15 13:15:18.192746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.192782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.193148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.193184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.193585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.193624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.193998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.194025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.194417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.194448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.194879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.194907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.195176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.195203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.195543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.195572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.195964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.195993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.196384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.196404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 
00:29:56.588 [2024-07-15 13:15:18.196750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.588 [2024-07-15 13:15:18.196768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.588 qpair failed and we were unable to recover it. 00:29:56.588 [2024-07-15 13:15:18.197143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.197169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.197526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.197555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.197949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.197980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.198374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.198406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.198776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.198805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.199171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.199200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.199473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.199500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.199792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.199819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.200179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.200207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 
00:29:56.589 [2024-07-15 13:15:18.200584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.200612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.201010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.201039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.201411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.201441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.201822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.201842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.202105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.202121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.202492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.202511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.202886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.202920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.203321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.203351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.203587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.203616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.204018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.204045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 
00:29:56.589 [2024-07-15 13:15:18.204435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.204461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.204843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.204868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.205238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.205264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.205646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.205672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.206080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.206105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.206480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.206505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.206907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.206933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.207334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.207352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.207732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.207747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.207961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.207975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 
00:29:56.589 [2024-07-15 13:15:18.208337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.208361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.208781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.208806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.209200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.209226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.209649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.209668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.210038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.210054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.210401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.210418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.210782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.210796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.211169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.211183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.211570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.211585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.211941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.211955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 
00:29:56.589 [2024-07-15 13:15:18.212299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.212313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.589 [2024-07-15 13:15:18.212702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.589 [2024-07-15 13:15:18.212716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.589 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.213091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.213106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.213456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.213472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.213821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.213835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.214218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.214245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.214601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.214616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.214970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.214985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.215364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.215379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.215577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.215596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 
00:29:56.590 [2024-07-15 13:15:18.216000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.216019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.216269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.216287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.216662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.216681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.217039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.217058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.217383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.217401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.217780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.217798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.218154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.218179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.218590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.218609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.218861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.218878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.219245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.219264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 
00:29:56.590 [2024-07-15 13:15:18.219448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.219467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.219855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.219873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.220267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.220285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.220646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.220663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.221027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.221045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.221426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.221444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.221835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.221854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.222240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.222259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.222476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.222494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.222839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.222858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 
00:29:56.590 [2024-07-15 13:15:18.223222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.223251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.223608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.223627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.223974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.223992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.224361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.224381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.224769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.224789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.225150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.225169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.225396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.225417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.225658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.225676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.225936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.225954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.226339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.226363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 
00:29:56.590 [2024-07-15 13:15:18.226727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.226751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.227034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.227057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.590 qpair failed and we were unable to recover it. 00:29:56.590 [2024-07-15 13:15:18.227310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.590 [2024-07-15 13:15:18.227335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.227755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.227780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.228186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.228210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.228467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.228493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.228759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.228785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.229185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.229210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.229591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.229617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.229872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.229898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 
00:29:56.591 [2024-07-15 13:15:18.230285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.230311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.230695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.230720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.231110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.231134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.231497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.231522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.231910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.231934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.232338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.232362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.232769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.232800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.233180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.233205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.233457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.233481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.233852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.233875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 
00:29:56.591 [2024-07-15 13:15:18.234264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.234290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.234698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.234723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.235006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.235030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.235418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.235443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.235845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.235871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.236272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.236296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.236674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.236700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.237110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.237134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.237538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.237568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.237958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.237987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 
00:29:56.591 [2024-07-15 13:15:18.238386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.238418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.238827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.238856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.239210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.239249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.239624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.239654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.239929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.239957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.240385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.240416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.240813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.240843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.241212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.241249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.241692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.241721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.242116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.242145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 
00:29:56.591 [2024-07-15 13:15:18.242538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.242568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.242951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.242980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.243384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.591 [2024-07-15 13:15:18.243414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.591 qpair failed and we were unable to recover it. 00:29:56.591 [2024-07-15 13:15:18.243691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.243722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.244143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.244172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.244619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.244649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.245046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.245077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.245471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.245503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.245898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.245927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.246327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.246358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 
00:29:56.592 [2024-07-15 13:15:18.246743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.246772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.247176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.247206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.247575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.247605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.248031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.248060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.248461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.248492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.248888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.248917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.249308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.249344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.249739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.249768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.250168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.250198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.250612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.250643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 
00:29:56.592 [2024-07-15 13:15:18.251032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.251062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.251460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.251492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.251876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.251905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.252301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.252331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.252744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.252774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.592 [2024-07-15 13:15:18.253157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.592 [2024-07-15 13:15:18.253187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.592 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.253578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.253611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.254004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.254034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.254419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.254449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.254849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.254879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 
00:29:56.593 [2024-07-15 13:15:18.255272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.255303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.255691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.255720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.256073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.256102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.256486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.256517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.256899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.256929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.257323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.257353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.257734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.257763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.258151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.258180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.258581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.258611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.259009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.259038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 
00:29:56.593 [2024-07-15 13:15:18.259417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.259448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.259837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.259868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.260269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.260299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.260571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.260605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.260997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.261028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.261343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.261374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.261798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.261827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.262255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.262286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.262673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.262706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.263091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.263120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 
00:29:56.593 [2024-07-15 13:15:18.263536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.263568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.593 [2024-07-15 13:15:18.263960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.593 [2024-07-15 13:15:18.263989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.593 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.264377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.264408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.264802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.264832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.265186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.265215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.265596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.265626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.265980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.266017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.266406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.266438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.266792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.266822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.267188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.267217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 
00:29:56.594 [2024-07-15 13:15:18.267617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.267649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.267923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.267953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.268356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.268387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.268813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.268842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.269242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.269273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.269727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.269756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.270186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.270215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.270669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.270700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.270972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.271002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.271407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.271437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 
00:29:56.594 [2024-07-15 13:15:18.271825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.271856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.272279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.272310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.272663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.272694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.273171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.273200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.273594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.273625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.274018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.274048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.274444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.274476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.274822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.274852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.594 qpair failed and we were unable to recover it. 00:29:56.594 [2024-07-15 13:15:18.275249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.594 [2024-07-15 13:15:18.275278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.275644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.275673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 
00:29:56.595 [2024-07-15 13:15:18.276076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.276106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.276473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.276504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.276771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.276803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.277201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.277241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.277683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.277712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.278111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.278141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.278514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.278545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.278943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.278973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.279360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.279390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.279784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.279813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 
00:29:56.595 [2024-07-15 13:15:18.280216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.280259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.280669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.280698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.281093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.281123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.281377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.281408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.281794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.281823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.282255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.282288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.282637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.282676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.283055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.283085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.283486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.283517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.283912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.283943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 
00:29:56.595 [2024-07-15 13:15:18.284323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.284354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.284759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.284790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.285186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.285215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.285603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.285633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.595 [2024-07-15 13:15:18.286022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.595 [2024-07-15 13:15:18.286052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.595 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.286458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.286489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.286879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.286909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.287310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.287341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.287721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.287751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.288144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.288174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 
00:29:56.596 [2024-07-15 13:15:18.288582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.288613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.289005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.289034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.289299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.289327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.289722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.289751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.290149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.290178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.290596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.290626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.291023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.291052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.291447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.291478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.291865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.291895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.292299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.292329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 
00:29:56.596 [2024-07-15 13:15:18.292725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.292756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.293181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.293212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.293622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.293652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.294014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.294045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.294324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.294358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.294750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.294780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.295181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.295211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.295610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.295642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.296035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.296066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.596 [2024-07-15 13:15:18.296419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.296450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 
00:29:56.596 [2024-07-15 13:15:18.296833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.596 [2024-07-15 13:15:18.296864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.596 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.297258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.297288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.297689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.297718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.298105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.298134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.298517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.298548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.298942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.298972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.299371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.299408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.299813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.299842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.300128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.300156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.300554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.300584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 
00:29:56.597 [2024-07-15 13:15:18.300985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.301014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.301420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.301450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.301839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.301868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.302263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.302296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.302692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.302723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.303104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.303133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.303533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.303563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.303919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.303949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.304331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.304362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.304763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.304793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 
00:29:56.597 [2024-07-15 13:15:18.305196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.305225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.305595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.305626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.306019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.306048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.306483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.306514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.306948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.306977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.597 [2024-07-15 13:15:18.307374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.597 [2024-07-15 13:15:18.307404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.597 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.307811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.307841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.308103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.308137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.308545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.308575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.308974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.309005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 
00:29:56.598 [2024-07-15 13:15:18.309341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.309372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.309777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.309806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.310224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.310282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.310675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.310705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.310990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.311021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.311386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.311416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.311799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.311828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.312228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.312269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.312686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.312718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.312974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.313007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 
00:29:56.598 [2024-07-15 13:15:18.313389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.313419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.313822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.313852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.314245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.314277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.314701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.314730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.315130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.315160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.315551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.598 [2024-07-15 13:15:18.315581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.598 qpair failed and we were unable to recover it. 00:29:56.598 [2024-07-15 13:15:18.315975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.316011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.316277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.316308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.316688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.316718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.317113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.317143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 
00:29:56.599 [2024-07-15 13:15:18.317504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.317535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.317925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.317954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.318350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.318380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.318814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.318843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.319225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.319265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.319667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.319697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.320090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.320119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.320521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.320551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.320905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.320935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.321331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.321362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 
00:29:56.599 [2024-07-15 13:15:18.321804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.321833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.322248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.322278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.322671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.322700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.323090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.323118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.323499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.323529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.323819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.323849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.324241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.324272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.324692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.324723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.325120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.325149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.325542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.325574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 
00:29:56.599 [2024-07-15 13:15:18.325971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.326000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.326378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.326410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.326805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-15 13:15:18.326835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.599 qpair failed and we were unable to recover it. 00:29:56.599 [2024-07-15 13:15:18.327241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.327272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.327659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.327690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.328075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.328105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.328487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.328517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.328932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.328962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.329356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.329386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.329794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.329823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 
00:29:56.600 [2024-07-15 13:15:18.330219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.330258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.330670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.330698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.331097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.331129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.331492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.331521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.331908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.331937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.332337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.332367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.332803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.332837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.333221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.333263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.333696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.333726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.334077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.334107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 
00:29:56.600 [2024-07-15 13:15:18.334374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.334406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.334801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.334831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.335239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.335270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.335713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.335742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.336136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.336165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.336456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.336489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.336881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.336911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.337257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.337289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.337688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.337717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.600 qpair failed and we were unable to recover it. 00:29:56.600 [2024-07-15 13:15:18.338106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-15 13:15:18.338135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 
00:29:56.601 [2024-07-15 13:15:18.338532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.338565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.338962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.338991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.339391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.339422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.339868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.339898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.340297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.340327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.340724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.340753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.341109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.341137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.341519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.341548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.341946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.341975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.342371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.342402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 
00:29:56.601 [2024-07-15 13:15:18.342804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.342834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.343103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.343134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.343512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.343542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.343938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.343978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.344358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.344389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.344779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.344811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.345228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.345269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.345678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.345707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.346103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.346132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.346496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.346528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 
00:29:56.601 [2024-07-15 13:15:18.346909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.346938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.347342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.347372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.347789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.347818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.348200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.348238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.348593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.348622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.349014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.349043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.349431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.349461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.349856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.349885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.350287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-15 13:15:18.350319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.601 qpair failed and we were unable to recover it. 00:29:56.601 [2024-07-15 13:15:18.350678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.350707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 
00:29:56.602 [2024-07-15 13:15:18.351100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.351130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.351496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.351526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.351906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.351936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.352334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.352364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.352637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.352667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.353096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.353125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.353493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.353526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.353771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.353802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.354180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.354210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.354490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.354521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 
00:29:56.602 [2024-07-15 13:15:18.354917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.354946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.355331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.355364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.355784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.355813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.356072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.356102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.356349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.356381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.356800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.356829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.357239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.357270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.357681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.357710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.358070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.358100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.358454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.358485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 
00:29:56.602 [2024-07-15 13:15:18.358869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.358898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.359127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.359157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.359431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.359464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.359915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.359951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.360338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.360368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.360643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.360674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.360957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.360987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.361381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.361411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.361799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.361829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.362226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.362277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 
00:29:56.602 [2024-07-15 13:15:18.362695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.362725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.363118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.363147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.363420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.363451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.363871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.363902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.364333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-15 13:15:18.364363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.602 qpair failed and we were unable to recover it. 00:29:56.602 [2024-07-15 13:15:18.364764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.364793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.365052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.365083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.365498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.365529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.365983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.366013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.366377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.366409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 
00:29:56.603 [2024-07-15 13:15:18.366642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.366670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.367059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.367088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.367475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.367506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.367773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.367804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.368159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.368191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.368590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.368621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.368866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.368896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.369278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.369307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.369725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.369754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.370150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.370179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 
00:29:56.603 [2024-07-15 13:15:18.370544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.370575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.370925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.370956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.371360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.371390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.371772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.371802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.372053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.372085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.372448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.372478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.372875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.372904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.373300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.373331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.373730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.373759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.374135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.374164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 
00:29:56.603 [2024-07-15 13:15:18.374534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.374565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.374960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.374989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.375374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.375406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.375784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.375820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.376218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.376260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.376643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.376674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.377063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.377095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.377466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.377495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.377768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.377798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.378201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.378238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 
00:29:56.603 [2024-07-15 13:15:18.378528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.378556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.378951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.378981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.379384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.379415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.379808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.379838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.380223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.380262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.603 [2024-07-15 13:15:18.380631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.603 [2024-07-15 13:15:18.380663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.603 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.381079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.381109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.381507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.381538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.381908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.381938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.382338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.382369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 
00:29:56.604 [2024-07-15 13:15:18.382759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.382790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.383184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.383213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.383604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.383634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.384019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.384049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.384404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.384435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.384826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.384855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.385246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.385278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.385699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.385729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.386153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.386183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.386571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.386603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 
00:29:56.604 [2024-07-15 13:15:18.387000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.387030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.387368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.387399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.387656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.387687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.387966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.387995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.388387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.388418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.388820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.388851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.389249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.389280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.389700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.389730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.390117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.390147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.390559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.390590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 
00:29:56.604 [2024-07-15 13:15:18.390830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.390860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.391145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.391175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.391559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.391591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.391948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.391984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.392341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.392372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.392639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.392666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.393029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.393060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.393441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.393473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.393870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.393900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 00:29:56.604 [2024-07-15 13:15:18.394305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.604 [2024-07-15 13:15:18.394335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.604 qpair failed and we were unable to recover it. 
00:29:56.605 [2024-07-15 13:15:18.394741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.605 [2024-07-15 13:15:18.394770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.605 qpair failed and we were unable to recover it. 00:29:56.605 [2024-07-15 13:15:18.395041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.605 [2024-07-15 13:15:18.395071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.605 qpair failed and we were unable to recover it. 00:29:56.605 [2024-07-15 13:15:18.395435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.605 [2024-07-15 13:15:18.395465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.605 qpair failed and we were unable to recover it. 00:29:56.605 [2024-07-15 13:15:18.395866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.605 [2024-07-15 13:15:18.395895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.605 qpair failed and we were unable to recover it. 00:29:56.876 [2024-07-15 13:15:18.396290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.876 [2024-07-15 13:15:18.396325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.876 qpair failed and we were unable to recover it. 00:29:56.876 [2024-07-15 13:15:18.396727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.876 [2024-07-15 13:15:18.396758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.876 qpair failed and we were unable to recover it. 00:29:56.876 [2024-07-15 13:15:18.397148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.876 [2024-07-15 13:15:18.397178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.876 qpair failed and we were unable to recover it. 00:29:56.876 [2024-07-15 13:15:18.397609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.876 [2024-07-15 13:15:18.397642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.876 qpair failed and we were unable to recover it. 00:29:56.876 [2024-07-15 13:15:18.398025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.876 [2024-07-15 13:15:18.398057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.876 qpair failed and we were unable to recover it. 00:29:56.876 [2024-07-15 13:15:18.398452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.876 [2024-07-15 13:15:18.398485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.876 qpair failed and we were unable to recover it. 
00:29:56.876 [2024-07-15 13:15:18.398860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.876 [2024-07-15 13:15:18.398892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.876 qpair failed and we were unable to recover it. 00:29:56.876 [2024-07-15 13:15:18.399272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.399303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.399716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.399746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.400059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.400088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.400374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.400405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.400799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.400829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.401115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.401143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.401541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.401570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.401842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.401870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.402268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.402298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 
00:29:56.877 [2024-07-15 13:15:18.402719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.402750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.403133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.403164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.403563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.403593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.403998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.404029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.404307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.404341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.404765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.404795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.405191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.405221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.405605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.405635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.406028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.406059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.406441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.406472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 
00:29:56.877 [2024-07-15 13:15:18.406750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.406783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.407175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.407206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.407634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.407665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.408067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.408129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.408552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.408608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.409008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.409060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.409376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.409433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.409895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.409949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.410385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.410439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.410880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.410935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 
00:29:56.877 [2024-07-15 13:15:18.411346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.411383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.411795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.411847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.412204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.412298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.412624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.412675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.413005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.413054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.413487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.413542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.413983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.414036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.414471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.414520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.414844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.414885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.415221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.415266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 
00:29:56.877 [2024-07-15 13:15:18.415705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.877 [2024-07-15 13:15:18.415750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.877 qpair failed and we were unable to recover it. 00:29:56.877 [2024-07-15 13:15:18.416194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.416278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.416717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.416768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.417087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.417141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.417574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.417627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.418095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.418148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.418618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.418671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.419002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.419056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.419491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.419546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.419988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.420040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 
00:29:56.878 [2024-07-15 13:15:18.420497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.420548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.420886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.420922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.421348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.421379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.421620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.421649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.422060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.422112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.422525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.422579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.423010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.423062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.423502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.423557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.423990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.424037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.424471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.424525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 
00:29:56.878 [2024-07-15 13:15:18.425005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.425058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.425386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.425441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.425822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.425872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.426327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.426400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.426809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.426864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.427334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.427388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.427849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.427903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.428336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.428389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.428835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.428887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.429299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.429353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 
00:29:56.878 [2024-07-15 13:15:18.429827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.429879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.430317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.430372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.430858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.430912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.431351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.431391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.431849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.431880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.432280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.432313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.432566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.432605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.433006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.433046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.433485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.433527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.433930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.433970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 
00:29:56.878 [2024-07-15 13:15:18.434321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.878 [2024-07-15 13:15:18.434362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.878 qpair failed and we were unable to recover it. 00:29:56.878 [2024-07-15 13:15:18.434763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.434803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.435211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.435266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.435685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.435728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.436103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.436128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.436597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.436623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.436858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.436898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.437300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.437341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.437645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.437686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.438120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.438151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 
00:29:56.879 [2024-07-15 13:15:18.438522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.438549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.438841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.438864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.439253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.439277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.439684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.439709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.440108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.440132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.440427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.440451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.440854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.440878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.441267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.441291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.441728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.441752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.442136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.442159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 
00:29:56.879 [2024-07-15 13:15:18.442548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.442572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.442956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.442984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.443291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.443318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.443716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.443750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.444135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.444163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.444568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.444598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.444981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.445009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.445335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.445364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.445775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.445802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.446210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.446248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 
00:29:56.879 [2024-07-15 13:15:18.446540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.446571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.446968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.446995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.447392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.447421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.447840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.447868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.448271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.448299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.448695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.448724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.449107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.449135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.449569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.449598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.449879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.449907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.450285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.450314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 
00:29:56.879 [2024-07-15 13:15:18.450726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.879 [2024-07-15 13:15:18.450754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.879 qpair failed and we were unable to recover it. 00:29:56.879 [2024-07-15 13:15:18.451155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.451183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.451564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.451594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.451836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.451865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.452304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.452334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.452708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.452736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.453023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.453049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.453442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.453471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.453861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.453889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.454158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.454185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 
00:29:56.880 [2024-07-15 13:15:18.454508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.454537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.455000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.455030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.455441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.455473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.455746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.455779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.455974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.456005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.456156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.456189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.456572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.456604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.456996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.457027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.457440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.457472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 00:29:56.880 [2024-07-15 13:15:18.457882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.880 [2024-07-15 13:15:18.457912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.880 qpair failed and we were unable to recover it. 
00:29:56.880 [2024-07-15 13:15:18.458162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.880 [2024-07-15 13:15:18.458192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:56.880 qpair failed and we were unable to recover it.
00:29:56.880 [... the same three-record sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats roughly two hundred more times at millisecond intervals, with wall-clock timestamps running from 13:15:18.458 through 13:15:18.543 and the elapsed-time prefix advancing from 00:29:56.880 to 00:29:56.886 ...]
00:29:56.886 [2024-07-15 13:15:18.543830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.543859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.544254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.544285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.544667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.544698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.545041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.545070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.545506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.545538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.545889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.545919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.546220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.546260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.546547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.546577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.546839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.546870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.547285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.547316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 
00:29:56.886 [2024-07-15 13:15:18.547730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.547760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.548170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.548201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.548549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.548578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.548984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.549014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.549309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.549341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.549745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.549774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.550152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.886 [2024-07-15 13:15:18.550182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.886 qpair failed and we were unable to recover it. 00:29:56.886 [2024-07-15 13:15:18.550674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.550704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.551108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.551139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.551586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.551616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 
00:29:56.887 [2024-07-15 13:15:18.552017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.552053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.552450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.552481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.552876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.552905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.553312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.553345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.553741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.553769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.554192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.554221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.554653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.554685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.555071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.555101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.555380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.555413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.555815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.555845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 
00:29:56.887 [2024-07-15 13:15:18.556250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.556280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.556713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.556742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.557106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.557135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.557488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.557519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.557942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.557972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.558242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.558272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.558695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.558724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.559059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.559090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.559537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.559566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.559997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.560026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 
00:29:56.887 [2024-07-15 13:15:18.560429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.560460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.560713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.560742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.561130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.561159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.561530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.561561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.561949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.561979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.562434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.562465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.562877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.562906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.563313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.563344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.563765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.563795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.564068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.564097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 
00:29:56.887 [2024-07-15 13:15:18.564479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.564509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.564916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.564946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.565338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.565369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.887 qpair failed and we were unable to recover it. 00:29:56.887 [2024-07-15 13:15:18.565770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.887 [2024-07-15 13:15:18.565800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.566180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.566211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.566688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.566718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.567113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.567143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.567527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.567557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.567958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.567987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.568325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.568354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 
00:29:56.888 [2024-07-15 13:15:18.568774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.568808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.569202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.569244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.569649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.569678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.570030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.570060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.570386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.570417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.570667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.570697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.571064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.571093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.571396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.571426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.571836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.571865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.572148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.572175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 
00:29:56.888 [2024-07-15 13:15:18.572594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.572625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.572980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.573010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.573380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.573410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.573683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.573713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.574121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.574150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.574569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.574601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.575027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.575057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.575375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.575404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.575812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.575841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.576276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.576306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 
00:29:56.888 [2024-07-15 13:15:18.576702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.576731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.577132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.577162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.577567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.577597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.578006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.578037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.578453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.578485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.578850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.578879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.579314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.579344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.579743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.579773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.580163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.580191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.580630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.580660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 
00:29:56.888 [2024-07-15 13:15:18.580938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.580966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.581375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.581404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.581799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.888 [2024-07-15 13:15:18.581828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.888 qpair failed and we were unable to recover it. 00:29:56.888 [2024-07-15 13:15:18.582242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.582272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.582706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.582737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.583145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.583175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.583518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.583549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.583919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.583948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.584240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.584272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.584586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.584615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 
00:29:56.889 [2024-07-15 13:15:18.584962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.584997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.585308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.585339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.585645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.585673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.586067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.586097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.586513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.586543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.586958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.586987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.587275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.587304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.587590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.587617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.587791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.587819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.588178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.588208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 
00:29:56.889 [2024-07-15 13:15:18.588619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.588649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.589051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.589079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.589478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.589508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.589903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.589932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.590430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.590461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.590857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.590887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.591283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.591313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.591706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.591736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.592136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.592166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.592650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.592682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 
00:29:56.889 [2024-07-15 13:15:18.593140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.593170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.593581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.593612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.594007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.594038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.594317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.594352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.594766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.594796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.595240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.595270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.595599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.595627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.595922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.595952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.596318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.596351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.596646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.596679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 
00:29:56.889 [2024-07-15 13:15:18.597099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.597131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.889 [2024-07-15 13:15:18.597541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.889 [2024-07-15 13:15:18.597570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.889 qpair failed and we were unable to recover it. 00:29:56.890 [2024-07-15 13:15:18.597957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.890 [2024-07-15 13:15:18.597985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.890 qpair failed and we were unable to recover it. 00:29:56.890 [2024-07-15 13:15:18.598161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.890 [2024-07-15 13:15:18.598191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.890 qpair failed and we were unable to recover it. 00:29:56.890 [2024-07-15 13:15:18.598617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.890 [2024-07-15 13:15:18.598647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.890 qpair failed and we were unable to recover it. 00:29:56.890 [2024-07-15 13:15:18.599054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.890 [2024-07-15 13:15:18.599083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.890 qpair failed and we were unable to recover it. 00:29:56.890 [2024-07-15 13:15:18.599251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.890 [2024-07-15 13:15:18.599283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.890 qpair failed and we were unable to recover it. 00:29:56.890 [2024-07-15 13:15:18.599725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.890 [2024-07-15 13:15:18.599755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.890 qpair failed and we were unable to recover it. 00:29:56.890 [2024-07-15 13:15:18.600157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.890 [2024-07-15 13:15:18.600186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.890 qpair failed and we were unable to recover it. 00:29:56.890 [2024-07-15 13:15:18.600520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.890 [2024-07-15 13:15:18.600550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.890 qpair failed and we were unable to recover it. 
00:29:56.895 [2024-07-15 13:15:18.677432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.677462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.677827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.677859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.678166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.678195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.678449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.678477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.678886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.678916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.679307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.679359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.679750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.679781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.680182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.680211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.680614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.680646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.681017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.681045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 
00:29:56.895 [2024-07-15 13:15:18.681415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.681450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.681849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.681879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.682263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.682292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.682700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.682728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.683116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.683145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.683568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.683597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.683989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.684019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.684417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.684446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.895 [2024-07-15 13:15:18.684825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.895 [2024-07-15 13:15:18.684855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.895 qpair failed and we were unable to recover it. 00:29:56.896 [2024-07-15 13:15:18.685260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.685291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 
00:29:56.896 [2024-07-15 13:15:18.685728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.685757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 00:29:56.896 [2024-07-15 13:15:18.686169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.686199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 00:29:56.896 [2024-07-15 13:15:18.686621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.686651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 00:29:56.896 [2024-07-15 13:15:18.687086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.687116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 00:29:56.896 [2024-07-15 13:15:18.687374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.687404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 00:29:56.896 [2024-07-15 13:15:18.687776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.687806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 00:29:56.896 [2024-07-15 13:15:18.688184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.688214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 00:29:56.896 [2024-07-15 13:15:18.688486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.688517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 00:29:56.896 [2024-07-15 13:15:18.688927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.688957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 00:29:56.896 [2024-07-15 13:15:18.689268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.689300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 
00:29:56.896 [2024-07-15 13:15:18.689713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.689742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 00:29:56.896 [2024-07-15 13:15:18.690143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.690173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 00:29:56.896 [2024-07-15 13:15:18.690582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.690612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 00:29:56.896 [2024-07-15 13:15:18.691007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.896 [2024-07-15 13:15:18.691037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:56.896 qpair failed and we were unable to recover it. 00:29:57.167 [2024-07-15 13:15:18.691435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.167 [2024-07-15 13:15:18.691466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.167 qpair failed and we were unable to recover it. 00:29:57.167 [2024-07-15 13:15:18.691881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.691914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.692294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.692326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.692711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.692742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.693219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.693263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.693453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.693484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 
00:29:57.168 [2024-07-15 13:15:18.693657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.693688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.694039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.694068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.694369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.694398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.694861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.694891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.695293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.695323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.695596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.695625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.696065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.696095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.696467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.696500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.696907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.696937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.697205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.697248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 
00:29:57.168 [2024-07-15 13:15:18.697632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.697669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.697942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.697974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.698243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.698274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.698708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.698737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.699100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.699130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.699383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.699415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.699816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.699847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.700216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.700255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.700621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.700651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.701107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.701136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 
00:29:57.168 [2024-07-15 13:15:18.701417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.701447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.701835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.701866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.702328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.702358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.702753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.702785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.703189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.703219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.703593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.703623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.704011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.704043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.704426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.704458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.704709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.704739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.705102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.705134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 
00:29:57.168 [2024-07-15 13:15:18.705520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.705551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.705943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.705973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.706337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.706367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.706754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.706786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.707182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.707212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.168 [2024-07-15 13:15:18.707599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.168 [2024-07-15 13:15:18.707629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.168 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.708020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.708051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.708458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.708489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.708917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.708946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.709332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.709359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 
00:29:57.169 [2024-07-15 13:15:18.709754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.709780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.710181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.710206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.710585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.710613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.711011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.711038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.711430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.711459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.711868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.711896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.712306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.712337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.712621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.712650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.713045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.713074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.713475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.713507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 
00:29:57.169 [2024-07-15 13:15:18.713808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.713846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.714276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.714309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.714742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.714772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.715169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.715199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.715623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.715657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.716046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.716079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.716352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.716384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.716787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.716819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.717198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.717252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.717635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.717665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 
00:29:57.169 [2024-07-15 13:15:18.718041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.718071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.718462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.718494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.718878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.718908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.719288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.719319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.719750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.719782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.720165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.720196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.720627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.720659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.721087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.721118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.721494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.721526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.721929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.721957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 
00:29:57.169 [2024-07-15 13:15:18.722346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.722377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.722760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.722791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.723195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.723224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.725107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.725176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.725604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.725643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.726044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.726074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-07-15 13:15:18.726453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.169 [2024-07-15 13:15:18.726484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.726879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.726911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.727305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.727337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.727719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.727749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 
00:29:57.170 [2024-07-15 13:15:18.728142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.728173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.728583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.728614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.728994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.729023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.729404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.729435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.729879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.729909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.730262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.730294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.730722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.730752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.731140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.731170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.731578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.731608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.732007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.732038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 
00:29:57.170 [2024-07-15 13:15:18.732423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.732459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.732855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.732885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.733268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.733299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.733733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.733763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.734155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.734184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.734461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.734495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.734887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.734918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.735309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.735340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.735722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.735752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-07-15 13:15:18.736136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.170 [2024-07-15 13:15:18.736167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.170 qpair failed and we were unable to recover it. 
00:29:57.170 [2024-07-15 13:15:18.736572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.170 [2024-07-15 13:15:18.736603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.170 qpair failed and we were unable to recover it.
[The same two-line error pair — posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 — repeats continuously in the log from 13:15:18.736 through 13:15:18.823, and every occurrence is followed by "qpair failed and we were unable to recover it."]
00:29:57.175 [2024-07-15 13:15:18.824027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-07-15 13:15:18.824057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-07-15 13:15:18.824447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-07-15 13:15:18.824477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-07-15 13:15:18.824872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-07-15 13:15:18.824902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.175 [2024-07-15 13:15:18.825304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.175 [2024-07-15 13:15:18.825334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.175 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.825616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.825649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.826041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.826072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.826464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.826495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.826892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.826927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.827310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.827341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.827742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.827772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 
00:29:57.176 [2024-07-15 13:15:18.828179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.828209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.828489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.828519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.828933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.828963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.829423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.829454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.829849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.829877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.830284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.830315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.830585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.830616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.831002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.831033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.831430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.831461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.831834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.831863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 
00:29:57.176 [2024-07-15 13:15:18.832252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.832282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.832683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.832714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.833114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.833143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.833510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.833541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.833929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.833958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.834358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.834390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.834784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.834814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.835182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.835213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.835625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.835657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.836036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.836066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 
00:29:57.176 [2024-07-15 13:15:18.836433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.836464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.836737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.836770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.837180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.837210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.837620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.837651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.838080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.838110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.838488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.838518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.838772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.838805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.839185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.839214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.176 [2024-07-15 13:15:18.839616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.176 [2024-07-15 13:15:18.839646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.176 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.840042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.840072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 
00:29:57.177 [2024-07-15 13:15:18.840298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.840332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.840726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.840757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.841149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.841179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.841589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.841620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.842003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.842033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.842457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.842489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.842883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.842912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.843174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.843212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.843623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.843654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.844042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.844072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 
00:29:57.177 [2024-07-15 13:15:18.844461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.844493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.844894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.844925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.845313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.845343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.845779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.845808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.846188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.846218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.846529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.846562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.846969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.846999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.847421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.847452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.847845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.847875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.848304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.848335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 
00:29:57.177 [2024-07-15 13:15:18.848619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.848648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.849071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.849101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.849496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.849526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.849880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.849910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.850316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.850348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.850810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.850839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.851248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.851279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.851709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.851738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.852128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.852157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.852542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.852573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 
00:29:57.177 [2024-07-15 13:15:18.852966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.852996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.853390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.853422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.853807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.853838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.854221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.854271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.854693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.854723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.855118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.855148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.855538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.855569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.855949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.855980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.856374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.856405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.177 qpair failed and we were unable to recover it. 00:29:57.177 [2024-07-15 13:15:18.856823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.177 [2024-07-15 13:15:18.856851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 
00:29:57.178 [2024-07-15 13:15:18.857248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.857279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.857669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.857700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.858147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.858176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.858573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.858603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.858949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.858978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.859380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.859411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.859824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.859853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.860252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.860290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.860567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.860598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.860984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.861012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 
00:29:57.178 [2024-07-15 13:15:18.861288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.861321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.861750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.861780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.862168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.862197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.862620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.862652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.863056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.863085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.863483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.863513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.863903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.863932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.864314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.864344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.864701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.864732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.865129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.865160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 
00:29:57.178 [2024-07-15 13:15:18.865586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.865618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.866018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.866048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.866443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.866473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.866881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.866910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.867297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.867329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.867707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.867736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.868020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.868049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.868328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.868362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.868637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.868666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.869051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.869080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 
00:29:57.178 [2024-07-15 13:15:18.869476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.869506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.869903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.869932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.870327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.870357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.870769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.870798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.871196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.871226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.871621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.871651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.871985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.872017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.872400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.872430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.872837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.872867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.178 qpair failed and we were unable to recover it. 00:29:57.178 [2024-07-15 13:15:18.873261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.178 [2024-07-15 13:15:18.873292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.179 qpair failed and we were unable to recover it. 
00:29:57.179 [2024-07-15 13:15:18.873673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.179 [2024-07-15 13:15:18.873703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.179 qpair failed and we were unable to recover it. 00:29:57.179 [2024-07-15 13:15:18.874086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.179 [2024-07-15 13:15:18.874116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.179 qpair failed and we were unable to recover it. 00:29:57.179 [2024-07-15 13:15:18.874393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.179 [2024-07-15 13:15:18.874426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.179 qpair failed and we were unable to recover it. 00:29:57.179 [2024-07-15 13:15:18.874847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.179 [2024-07-15 13:15:18.874876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.179 qpair failed and we were unable to recover it. 00:29:57.179 [2024-07-15 13:15:18.875269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.179 [2024-07-15 13:15:18.875300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.179 qpair failed and we were unable to recover it. 00:29:57.179 [2024-07-15 13:15:18.875683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.179 [2024-07-15 13:15:18.875712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.179 qpair failed and we were unable to recover it. 00:29:57.179 [2024-07-15 13:15:18.876057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.179 [2024-07-15 13:15:18.876086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.179 qpair failed and we were unable to recover it. 00:29:57.179 [2024-07-15 13:15:18.876375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.179 [2024-07-15 13:15:18.876411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.179 qpair failed and we were unable to recover it. 00:29:57.179 [2024-07-15 13:15:18.876799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.179 [2024-07-15 13:15:18.876828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.179 qpair failed and we were unable to recover it. 00:29:57.179 [2024-07-15 13:15:18.877198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.179 [2024-07-15 13:15:18.877228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.179 qpair failed and we were unable to recover it. 
00:29:57.179 [2024-07-15 13:15:18.877632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:57.179 [2024-07-15 13:15:18.877661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 
00:29:57.179 qpair failed and we were unable to recover it. 
00:29:57.184 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously with only the timestamps changing, through [2024-07-15 13:15:18.964053] ...]
00:29:57.184 [2024-07-15 13:15:18.964440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.964470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.964855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.964885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.965277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.965310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.965744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.965774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.966034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.966064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.966453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.966485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.966878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.966910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.967292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.967323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.967740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.967769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.968045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.968076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 
00:29:57.184 [2024-07-15 13:15:18.968436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.968467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.968869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.968899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.969225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.969271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.969586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.969615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.970018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.970050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.970442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.970473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.970858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.970887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.971277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.971307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.184 [2024-07-15 13:15:18.971721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.184 [2024-07-15 13:15:18.971750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.184 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.972144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.972174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 
00:29:57.185 [2024-07-15 13:15:18.972425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.972456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.972854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.972883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.973269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.973300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.973691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.973720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.974072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.974102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.974498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.974531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.974929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.974958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.975358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.975388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.975792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.975822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.976204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.976245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 
00:29:57.185 [2024-07-15 13:15:18.976625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.976656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.976917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.976946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.977365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.977396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.977832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.977862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.978259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.978289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.978649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.978679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.979055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.979086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.979379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.979408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.979811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.979841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.980245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.980275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 
00:29:57.185 [2024-07-15 13:15:18.980670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.980699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.981063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.981093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.981364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.981398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.981808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.981838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.185 [2024-07-15 13:15:18.982222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.185 [2024-07-15 13:15:18.982266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.185 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.982666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.982698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.983098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.983127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.983495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.983525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.983904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.983935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.984322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.984352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 
00:29:57.457 [2024-07-15 13:15:18.984745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.984774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.985172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.985201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.985617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.985648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.986027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.986058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.986429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.986467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.986866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.986895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.987280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.987310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.987726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.987755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.988131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.988161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.988579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.988609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 
00:29:57.457 [2024-07-15 13:15:18.989000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.989030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.989409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.989441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.989837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.989867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.990263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.990294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.990576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.990606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.991007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.991037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.991307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.991357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.991765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.991795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.992178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.992208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.992596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.992626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 
00:29:57.457 [2024-07-15 13:15:18.993018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.993048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.993455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.993487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.993841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.993870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.994151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.994180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.994592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.994622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.995021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.995050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.995409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.995440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.995823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.995853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.457 [2024-07-15 13:15:18.996047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.457 [2024-07-15 13:15:18.996076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.457 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:18.996335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:18.996366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 
00:29:57.458 [2024-07-15 13:15:18.996766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:18.996796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:18.996987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:18.997019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:18.997406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:18.997436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:18.997716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:18.997748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:18.998173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:18.998204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:18.998620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:18.998651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:18.999040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:18.999069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:18.999454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:18.999485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:18.999865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:18.999894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.000282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.000314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 
00:29:57.458 [2024-07-15 13:15:19.000733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.000763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.001164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.001194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.001614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.001645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.001922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.001953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.002362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.002399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.002772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.002802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.003189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.003220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.003597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.003627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.003908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.003940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.004338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.004372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 
00:29:57.458 [2024-07-15 13:15:19.004774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.004803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.005192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.005222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.005502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.005535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.005944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.005974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.006362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.006393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.006789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.006818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.007212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.007252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.007608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.007637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.008023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.008055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.008374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.008406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 
00:29:57.458 [2024-07-15 13:15:19.008764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.008793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.009065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.009094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.009372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.009406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.458 [2024-07-15 13:15:19.009793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.458 [2024-07-15 13:15:19.009822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.458 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.010217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.010258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.010691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.010722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.011107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.011136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.011524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.011554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.011908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.011938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.012335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.012367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 
00:29:57.459 [2024-07-15 13:15:19.012769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.012799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.013188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.013217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.013628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.013659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.014053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.014082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.014454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.014484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.014866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.014896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.015265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.015296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.015686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.015714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.016111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.016142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 00:29:57.459 [2024-07-15 13:15:19.016547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.459 [2024-07-15 13:15:19.016579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.459 qpair failed and we were unable to recover it. 
00:29:57.459 [2024-07-15 13:15:19.016971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.459 [2024-07-15 13:15:19.017000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.459 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7fe150000b90 against 10.0.0.2:4420, qpair not recovered) repeats for every connection attempt from 13:15:19.017 through 13:15:19.105 ...]
00:29:57.465 [2024-07-15 13:15:19.105889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.465 [2024-07-15 13:15:19.105918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.465 qpair failed and we were unable to recover it.
00:29:57.465 [2024-07-15 13:15:19.106330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.106369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.108384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.108449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.108862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.108899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.109189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.109223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.111027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.111086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.111509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.111545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.111991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.112023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.112421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.112451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.112728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.112761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.113143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.113176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 
00:29:57.465 [2024-07-15 13:15:19.113595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.113628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.115425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.115481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.115952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.115986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.116385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.116417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.116859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.116887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.117161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.117193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.118360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.118411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.465 [2024-07-15 13:15:19.118803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.465 [2024-07-15 13:15:19.118837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.465 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.119228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.119272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.119698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.119728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 
00:29:57.466 [2024-07-15 13:15:19.120101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.120132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.120568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.120600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.120998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.121028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.121321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.121354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.121745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.121775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.122124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.122155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.122548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.122580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.122976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.123007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.123401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.123433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.123827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.123857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 
00:29:57.466 [2024-07-15 13:15:19.124253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.124284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.124696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.124727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.125072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.125102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.125393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.125424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.125847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.125882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.126117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.126150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.126542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.126573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.126953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.126983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.127379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.127410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.127773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.127805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 
00:29:57.466 [2024-07-15 13:15:19.128185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.128214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.128666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.128698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.129043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.129072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.129457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.129488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.129871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.129901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.130171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.130202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.466 qpair failed and we were unable to recover it. 00:29:57.466 [2024-07-15 13:15:19.130623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.466 [2024-07-15 13:15:19.130655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.131071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.131101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.131526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.131558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.131938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.131968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 
00:29:57.467 [2024-07-15 13:15:19.132342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.132373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.132646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.132675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.133057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.133086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.133448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.133480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.133879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.133908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.134303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.134336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.134747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.134777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.135168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.135198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.135625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.135657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.136048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.136078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 
00:29:57.467 [2024-07-15 13:15:19.136457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.136488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.136873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.136905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.137314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.137347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.137728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.137759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.138143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.138172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.138572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.138607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.139005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.139034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.139412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.139443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.139826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.139856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.140254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.140286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 
00:29:57.467 [2024-07-15 13:15:19.140694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.140723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.140982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.141016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.141427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.141458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.141743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.141772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.142172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.142208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.142632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.142662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.143042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.143071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.143446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.143477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.143862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.143892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.144302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.144332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 
00:29:57.467 [2024-07-15 13:15:19.144705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.144735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.145111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.145141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.467 [2024-07-15 13:15:19.145605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.467 [2024-07-15 13:15:19.145636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.467 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.146040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.146071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.146475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.146505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.146772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.146802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.147218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.147272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.147679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.147709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.148095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.148124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.148524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.148555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
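For context: errno 111 on Linux is ECONNREFUSED, so every record in the block above is the host-side NVMe/TCP initiator retrying a connection to 10.0.0.2:4420 while nothing is listening there; posix_sock_create() reports the raw connect() failure, nvme_tcp_qpair_connect_sock() reports the resulting qpair connection error, and the test then notes that the qpair could not be recovered. The following standalone C sketch is not SPDK code and the address, port, retry count and backoff are illustrative only; it simply reproduces the same errno under the same condition (no listener on the port):

/* connect_retry.c - minimal sketch (not SPDK code): retry a TCP connect and
 * report ECONNREFUSED (errno 111) the way the initiator log above does.
 * Target address, port, retry count and backoff are illustrative only. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        /* With no listener on the port, connect() fails with errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        sleep(1); /* simple fixed backoff before the next attempt */
    }
    return 1;
}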
00:29:57.468 [2024-07-15 13:15:19.148950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.148980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.149350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.149380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.149771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.149801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.150002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.150035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 888213 Killed "${NVMF_APP[@]}" "$@" 00:29:57.468 [2024-07-15 13:15:19.150433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.150471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.150895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.150925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:57.468 [2024-07-15 13:15:19.151313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.151344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:57.468 [2024-07-15 13:15:19.151728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.151758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
00:29:57.468 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:57.468 [2024-07-15 13:15:19.152042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.152072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:57.468 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:57.468 [2024-07-15 13:15:19.152480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.152512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.152898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.152929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.153320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.153351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.153762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.153793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.154147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.154178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.154480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.154511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.154896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.154926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.155326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.155356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
00:29:57.468 [2024-07-15 13:15:19.155732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.155764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.156146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.156176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.156565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.156595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.156992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.157021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.157404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.157434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.157845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.157875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.158264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.158295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.158696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.158727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.159105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.159135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 00:29:57.468 [2024-07-15 13:15:19.159530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.468 [2024-07-15 13:15:19.159562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.468 qpair failed and we were unable to recover it. 
00:29:57.469 [2024-07-15 13:15:19.159914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.159944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.160282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.160314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=889216 00:29:57.469 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 889216 00:29:57.469 [2024-07-15 13:15:19.160739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.160770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:57.469 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 889216 ']' 00:29:57.469 [2024-07-15 13:15:19.161160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.161190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.469 [2024-07-15 13:15:19.161511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.161550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:57.469 [2024-07-15 13:15:19.161725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.161755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
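Interleaved with the connect errors, the shell trace above shows the tc2 case restarting the target: the previous nvmf_tgt (PID 888213) is killed by target_disconnect.sh, disconnect_init 10.0.0.2 calls nvmfappstart -m 0xF0, and a fresh nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 (a core mask selecting cores 4-7) is launched inside the cvl_0_0_ns_spdk network namespace and recorded as nvmfpid=889216, after which waitforlisten blocks until the app is up and listening on the UNIX domain socket /var/tmp/spdk.sock. The real waitforlisten is a shell helper in the autotest scripts; the standalone C sketch below only illustrates the underlying check it performs, probing the RPC socket until it accepts a connection or a timeout expires. The socket path is the one from the message above; the 30 s budget is illustrative.

/* wait_for_rpc_sock.c - standalone illustration (not the autotest helper):
 * probe a UNIX domain socket until it accepts a connection, i.e. the
 * target's RPC server is listening. Path taken from the log; the timeout
 * is illustrative. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int rpc_sock_ready(const char *path)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd, ok;

    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;
    ok = (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0);
    close(fd);
    return ok;
}

int main(void)
{
    const char *path = "/var/tmp/spdk.sock";

    for (int i = 0; i < 30; i++) {        /* ~30 s budget, one probe per second */
        if (rpc_sock_ready(path)) {
            printf("%s is accepting connections\n", path);
            return 0;
        }
        sleep(1);
    }
    fprintf(stderr, "timed out waiting for %s\n", path);
    return 1;
}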
00:29:57.469 [2024-07-15 13:15:19.162141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:57.469 [2024-07-15 13:15:19.162172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 13:15:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:57.469 [2024-07-15 13:15:19.162443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.162477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.162883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.162915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.163194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.163227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.163533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.163568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.163953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.163984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.164349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.164381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.164766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.164799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.165041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.165071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 
00:29:57.469 [2024-07-15 13:15:19.165468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.165500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.165898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.165930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.166327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.166365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.166800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.166831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.167253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.167286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.167589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.167620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.167885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.167918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.168322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.168355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.168633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.168666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.169052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.169083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 
00:29:57.469 [2024-07-15 13:15:19.169332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.169364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.169781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.169814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.170050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.170080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.170476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.170507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.170660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.170694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.171081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.469 [2024-07-15 13:15:19.171112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.469 qpair failed and we were unable to recover it. 00:29:57.469 [2024-07-15 13:15:19.171555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.171589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.171972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.172003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.172388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.172420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.172841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.172870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 
00:29:57.470 [2024-07-15 13:15:19.173258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.173289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.173743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.173772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.174053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.174083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.174290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.174320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.174629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.174661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.174963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.174993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.175388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.175420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.175834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.175864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.176258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.176288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.176707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.176737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 
00:29:57.470 [2024-07-15 13:15:19.177141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.177171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.177431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.177463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.177868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.177899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.178294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.178327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.178714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.178745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.179134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.179165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.179535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.179567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.179962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.179997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.180306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.180337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.180776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.180808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 
00:29:57.470 [2024-07-15 13:15:19.181093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.181124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.181497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.181531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.181919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.181964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.182399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.182430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.182846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.182877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.183165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.183197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.183615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.183646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.184042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.184073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.184477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.184508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.184780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.184812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 
00:29:57.470 [2024-07-15 13:15:19.185196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.185226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.185672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.470 [2024-07-15 13:15:19.185702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.470 qpair failed and we were unable to recover it. 00:29:57.470 [2024-07-15 13:15:19.186088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.186119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.186522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.186553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.186917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.186948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.187337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.187368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.187757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.187788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.188037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.188068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.188442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.188475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.188897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.188927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 
00:29:57.471 [2024-07-15 13:15:19.189181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.189210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.189603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.189634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.189908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.189940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.190338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.190369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.190783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.190814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.191204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.191243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.191584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.191614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.192010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.192040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.192436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.192467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.192855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.192886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 
00:29:57.471 [2024-07-15 13:15:19.193278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.193309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.193574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.193607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.193998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.194026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.194416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.194446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.194854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.194884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.195288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.195320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.195729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.195760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.196147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.196178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.196592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.196625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.197026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.197056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 
00:29:57.471 [2024-07-15 13:15:19.197449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.197480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.197867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.471 [2024-07-15 13:15:19.197897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.471 qpair failed and we were unable to recover it. 00:29:57.471 [2024-07-15 13:15:19.198296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.198333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.198727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.198756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.199136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.199165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.199538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.199570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.199970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.200000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.200365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.200396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.200685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.200715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.201100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.201129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 
00:29:57.472 [2024-07-15 13:15:19.201546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.201577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.201973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.202003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.202388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.202419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.202676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.202708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.203104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.203133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.203426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.203456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.203887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.203917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.204321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.204351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.204770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.204800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.205177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.205208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 
00:29:57.472 [2024-07-15 13:15:19.205633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.205663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.205945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.205975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.206371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.206401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.206796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.206825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.207219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.207263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.207623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.207652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.208103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.208132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.208507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.208538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.208923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.208952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.209393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.209425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 
00:29:57.472 [2024-07-15 13:15:19.209820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.209850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.210296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.210327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.210669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.210699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.210913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.210943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.211220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.211262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.211662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.211692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.212086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.212115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.212494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.212524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.472 [2024-07-15 13:15:19.212893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.472 [2024-07-15 13:15:19.212923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.472 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.213335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.213364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 
00:29:57.473 [2024-07-15 13:15:19.213757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.213786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.214165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.214196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.214585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.214623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.214984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.215014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.215403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.215433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.215831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.215860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.216137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.216166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.216441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.216471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.216732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.216762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.217173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.217203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 
00:29:57.473 [2024-07-15 13:15:19.217590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.217620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.218022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.218051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.218452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.218482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.218536] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:29:57.473 [2024-07-15 13:15:19.218596] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.473 [2024-07-15 13:15:19.218874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.218903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.219273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.219307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.219749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.219779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.220177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.220207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.220671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.220703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.221008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.221039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 
00:29:57.473 [2024-07-15 13:15:19.221305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.221339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.221644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.221675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.221921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.221951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.222223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.222265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.222705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.222736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.223126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.223156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.223547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.223579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.223974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.224004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.224409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.224440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.224830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.224860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 
00:29:57.473 [2024-07-15 13:15:19.225251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.225283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.225706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.225735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.226149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.226179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.226387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.226422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.226856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.226887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.227319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.227350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.227780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.473 [2024-07-15 13:15:19.227811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.473 qpair failed and we were unable to recover it. 00:29:57.473 [2024-07-15 13:15:19.228167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.228197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.228417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.228453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.228711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.228743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 
00:29:57.474 [2024-07-15 13:15:19.229139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.229169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.229561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.229593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.230026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.230057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.230431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.230462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.230821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.230851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.231285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.231317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.231728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.231759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.232038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.232067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.232464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.232495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.232749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.232779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 
00:29:57.474 [2024-07-15 13:15:19.233162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.233192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.233597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.233628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.234024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.234053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.234371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.234402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.234797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.234826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.235189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.235225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.235623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.235653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.236049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.236078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.236534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.236565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.236922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.236951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 
00:29:57.474 [2024-07-15 13:15:19.237355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.237385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.237776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.237805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.238191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.238221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.238602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.238631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.239039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.239069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.239440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.239471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.239766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.239795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.240196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.240225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.240634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.240666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.241098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.241128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 
00:29:57.474 [2024-07-15 13:15:19.241523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.241554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.241970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.241998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.242397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.242428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.242823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.242855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.243251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.243280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.243693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.474 [2024-07-15 13:15:19.243722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.474 qpair failed and we were unable to recover it. 00:29:57.474 [2024-07-15 13:15:19.244084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.244113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.244505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.244534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.244899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.244929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.245354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.245385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 
00:29:57.475 [2024-07-15 13:15:19.245812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.245841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.246243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.246273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.246564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.246595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.246830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.246861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.247265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.247297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.247782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.247812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.248199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.248240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.248642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.248672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.249074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.249105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.249395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.249425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 
00:29:57.475 [2024-07-15 13:15:19.249834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.249864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.250148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.250178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.250433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.250463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.250833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.250862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.251255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.251286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.251740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.251775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.252169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.252198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.252659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.252690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.253086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.253116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.253365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.253396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 
00:29:57.475 [2024-07-15 13:15:19.253795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.253824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.254215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.254266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.254509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.254538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.254936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.254965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.255371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.255401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.255805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.255835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.256221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.256263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.256645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.256673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.257074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.257103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.257504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.257537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 
00:29:57.475 [2024-07-15 13:15:19.257796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.257828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.258104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.258133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.258499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.475 [2024-07-15 13:15:19.258530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.475 qpair failed and we were unable to recover it. 00:29:57.475 [2024-07-15 13:15:19.258917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.258946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.259335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.259365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.259725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.259755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.476 [2024-07-15 13:15:19.260152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.260182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.260552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.260582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.260974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.261003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.261390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.261422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 
00:29:57.476 [2024-07-15 13:15:19.261840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.261870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.262264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.262294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.262686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.262715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.263112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.263141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.263403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.263435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.263834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.263863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.264069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.264099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.264491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.264521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.264921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.264951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.265347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.265377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 
00:29:57.476 [2024-07-15 13:15:19.265775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.265805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.266215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.266255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.266639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.266668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.267061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.267091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.267361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.267391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.267685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.267721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.268077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.268106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.268504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.268534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.268921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.268950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.269365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.269397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 
00:29:57.476 [2024-07-15 13:15:19.269788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.269818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.270209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.476 [2024-07-15 13:15:19.270256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.476 qpair failed and we were unable to recover it. 00:29:57.476 [2024-07-15 13:15:19.270597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.477 [2024-07-15 13:15:19.270628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.477 qpair failed and we were unable to recover it. 00:29:57.477 [2024-07-15 13:15:19.270883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.477 [2024-07-15 13:15:19.270915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.477 qpair failed and we were unable to recover it. 00:29:57.477 [2024-07-15 13:15:19.271284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.477 [2024-07-15 13:15:19.271314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.477 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.271576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.271608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.272001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.272033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.272415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.272446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.272846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.272876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.273195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.273226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 
00:29:57.750 [2024-07-15 13:15:19.273619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.273651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.274050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.274080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.274442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.274473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.274863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.274893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.275277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.275309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.275722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.275751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.276153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.276183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.276466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.276496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.276955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.276985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.277365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.277398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 
00:29:57.750 [2024-07-15 13:15:19.277687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.277716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.278115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.278144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.278533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.278564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.278972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.279001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.279402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.279432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.279835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.279865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.280246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.280277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.280636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.280665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.281034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.281064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.281334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.281364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 
00:29:57.750 [2024-07-15 13:15:19.281761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.281791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.282191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.282220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.282620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.282651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.283040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.283069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.283447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.283477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.283876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.283911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.284277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.284307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-07-15 13:15:19.284584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-07-15 13:15:19.284613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.284993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.285024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.285396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.285429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 
00:29:57.751 [2024-07-15 13:15:19.285846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.285875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.286302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.286334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.286743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.286773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.287189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.287218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.287618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.287651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.288003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.288034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.288415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.288446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.288856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.288886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.289170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.289199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.289685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.289715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 
00:29:57.751 [2024-07-15 13:15:19.290122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.290151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.290516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.290546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.290879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.290910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.291277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.291308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.291692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.291722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.292119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.292148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.292515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.292547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.292932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.292962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.293346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.293376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.293772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.293803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 
00:29:57.751 [2024-07-15 13:15:19.294199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.294239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.294644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.294675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.295057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.295088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.295371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.295402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.295784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.295813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.296202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.296244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.296654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.296685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.297087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.297117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.297485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.297517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.297901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.297932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 
00:29:57.751 [2024-07-15 13:15:19.298172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.298201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.298573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.298605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.299005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.299035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.299422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.299453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.299839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.299870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.300263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.300299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.300691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.300721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.301069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.301098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-07-15 13:15:19.301513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-07-15 13:15:19.301544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.301937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.301967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 
00:29:57.752 [2024-07-15 13:15:19.302371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.302401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.302800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.302830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.303187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.303216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.303637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.303666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.304072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.304103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.304376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.304407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.304693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.304724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.305074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.305104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.305486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.305518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.305981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.306012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 
00:29:57.752 [2024-07-15 13:15:19.306396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.306427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.306843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.306872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.307095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.307125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.307555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.307587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.307995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.308026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.308427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.308457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.308855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.308885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.309278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.309308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.309566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.309599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.309996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.310026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 
00:29:57.752 [2024-07-15 13:15:19.310333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.310364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.310808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.310838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.311225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.311626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.311980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.312011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.312379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.312410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.312650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.312680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.313043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.313078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.313471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.313501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.313897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.313928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.314154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.314184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 
00:29:57.752 [2024-07-15 13:15:19.314486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.314517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.314923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.314952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.315354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.315385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.315786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.315815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.316089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.316122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.316490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.316527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.316925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.316954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.317175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.317204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.317635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-07-15 13:15:19.317666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-07-15 13:15:19.317908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.317938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 [2024-07-15 13:15:19.318223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.318266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.318638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.318667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.318703] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.753 [2024-07-15 13:15:19.319069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.319099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.319493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.319526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.319765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.319794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.320178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.320208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.320501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.320533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.320939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.320970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.321382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.321418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 [2024-07-15 13:15:19.321704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.321737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.322147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.322177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.322587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.322618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.322865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.322894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.323279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.323311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.323692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.323722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.324124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.324153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.324571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.324602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.324856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.324886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.325355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.325385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 [2024-07-15 13:15:19.325809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.325840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.326245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.326275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.326573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.326606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.326966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.326996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.327420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.327460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.327912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.327940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.328251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.328281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.328685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.328713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.328997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.329025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.329297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.329326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 [2024-07-15 13:15:19.329772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.329800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.330137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.330165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.330432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.330462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.330933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.330961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.331354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.331382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.331716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.331743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.332164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.332192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.332649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.332677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.332937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.332964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-07-15 13:15:19.333359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.333387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 [2024-07-15 13:15:19.333828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-07-15 13:15:19.333855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.334243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.334273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.334699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.334726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.335121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.335148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.335546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.335575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.335848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.335875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.336292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.336321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.336681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.336709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.336978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.337005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.337249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.337283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 
00:29:57.754 [2024-07-15 13:15:19.337604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.337631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.338091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.338119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.338527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.338555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.338926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.338956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.339424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.339455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.339820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.339849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.340159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.340188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.340658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.340689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.341080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.341110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.341507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.341536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 
00:29:57.754 [2024-07-15 13:15:19.341937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.341966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.342371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.342401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.342806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.342835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.343173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.343204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.343624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.343655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.344015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.344045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.344506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.344537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.344922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.344951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.345191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.345221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.345657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.345687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 
00:29:57.754 [2024-07-15 13:15:19.346081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.346111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.346517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.346547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.346948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.346977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-07-15 13:15:19.347378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-07-15 13:15:19.347409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.347801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.347827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.348218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.348260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.348705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.348735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.348965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.348995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.349365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.349395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.349812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.349842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 
00:29:57.755 [2024-07-15 13:15:19.350119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.350151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.350550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.350580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.350941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.350970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.351341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.351372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.351769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.351798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.352203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.352243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.352642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.352672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.353058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.353087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.353463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.353494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.353688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.353718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 
00:29:57.755 [2024-07-15 13:15:19.354105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.354136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.354505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.354536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.354936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.354966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.355374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.355405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.355683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.355716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.355840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.355867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.356262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.356293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.356699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.356729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.357112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.357141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.357378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.357408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 
00:29:57.755 [2024-07-15 13:15:19.357800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.357830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.358240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.358271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.358681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.358709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.358970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.359000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.359390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.359421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.359848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.359877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.360265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.360296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.360783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.360811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.361214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.361255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.361500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.361531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 
00:29:57.755 [2024-07-15 13:15:19.361774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.361804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.362260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.362292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.362572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.362604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.362989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.755 [2024-07-15 13:15:19.363020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.755 qpair failed and we were unable to recover it. 00:29:57.755 [2024-07-15 13:15:19.363396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.363427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.363812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.363842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.364207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.364256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.364526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.364556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.364937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.364966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.365350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.365381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 
00:29:57.756 [2024-07-15 13:15:19.365658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.365688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.365971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.366001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.366402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.366435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.366920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.366950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.367348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.367379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.367813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.367843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.368315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.368346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.368763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.368794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.369050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.369082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.369337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.369368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 
00:29:57.756 [2024-07-15 13:15:19.369755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.369785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.370169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.370199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.370601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.370632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.370999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.371030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.371428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.371460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.371851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.371881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.372283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.372313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.372610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.372641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.373032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.373062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.373447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.373479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 
00:29:57.756 [2024-07-15 13:15:19.373876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.373906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.374309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.374340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.374688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.374717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.375104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.375134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.375531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.375563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.375961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.375990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.376389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.376420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.376826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.376855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.377257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.377286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.377755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.377784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 
00:29:57.756 [2024-07-15 13:15:19.378182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.378211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.378630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.378661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.379106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.379137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.379511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.379542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.379923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.756 [2024-07-15 13:15:19.379953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.756 qpair failed and we were unable to recover it. 00:29:57.756 [2024-07-15 13:15:19.380341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.380372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.380770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.380806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.381211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.381251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.381535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.381567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.381948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.381978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 
00:29:57.757 [2024-07-15 13:15:19.382346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.382377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.382794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.382824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.383097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.383126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.383501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.383533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.383935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.383964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.384377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.384408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.384821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.384850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.385214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.385255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.385675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.385705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.385974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.386004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 
00:29:57.757 [2024-07-15 13:15:19.386378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.386409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.386787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.386817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.387223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.387263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.387700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.387730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.388125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.388155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.388544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.388576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.388969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.388998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.389404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.389436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.389693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.389722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.390141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.390171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 
00:29:57.757 [2024-07-15 13:15:19.390576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.390607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.757 qpair failed and we were unable to recover it. 00:29:57.757 [2024-07-15 13:15:19.391009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.757 [2024-07-15 13:15:19.391039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.391292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.391324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.391727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.391758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.392155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.392185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.392592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.392623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.393018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.393047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.393432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.393463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.393852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.393881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.394284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.394315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-07-15 13:15:19.394720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.394748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.395134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.395164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.395589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.395620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.396030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.396059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.396430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.396460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.396883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.396914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.397167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.397202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.397607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.397639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.398024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.398053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.398446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.398477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-07-15 13:15:19.398876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.398906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.399164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.399195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.399614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.399645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.399884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.399914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.400325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.400355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.400789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.400818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.401212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.401252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.401530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.401560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.401968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.401998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.402406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.402437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-07-15 13:15:19.402712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.402742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.403130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.403159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.403429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.403458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.403866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.403896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.404364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.404396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.404641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.404673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.404942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.404973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.405347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.405378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.405768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.405798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.406228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.406271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 
00:29:57.758 [2024-07-15 13:15:19.406402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.406430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.406808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.758 [2024-07-15 13:15:19.406838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.758 qpair failed and we were unable to recover it. 00:29:57.758 [2024-07-15 13:15:19.407119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.407148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.407417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.407452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.407739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.407771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.408182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.408211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.408454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.408484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.408868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.408897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.409296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.409326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.409724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.409753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 
00:29:57.759 [2024-07-15 13:15:19.410140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.410170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.410552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.410583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.410984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.411013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.411404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.411434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.411833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.411862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.412262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.412294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.412655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.412692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.413101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.413132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.413495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.413525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.413591] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:57.759 [2024-07-15 13:15:19.413640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:57.759 [2024-07-15 13:15:19.413648] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:57.759 [2024-07-15 13:15:19.413656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:57.759 [2024-07-15 13:15:19.413662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:57.759 [2024-07-15 13:15:19.413684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.413715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.413730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:57.759 [2024-07-15 13:15:19.413874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:57.759 [2024-07-15 13:15:19.413989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.414017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.414049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:57.759 [2024-07-15 13:15:19.414050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:57.759 [2024-07-15 13:15:19.414448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.414480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.414873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.414902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.415161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.415191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.415613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.415644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
00:29:57.759 [2024-07-15 13:15:19.416047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.759 [2024-07-15 13:15:19.416076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.759 qpair failed and we were unable to recover it.
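The app_setup_trace notices above come from an SPDK application starting with all tracepoint groups enabled (mask 0xFFFF), and the reactor_run notices show its reactors coming up on cores 4-7; that startup output is simply interleaved with the initiator's connect retries in this console capture. The notices themselves spell out the two ways to look at the trace data; a short sketch of both follows, using only the arguments and paths printed above (the /tmp destination is an arbitrary choice):

  # take a live snapshot of the running nvmf app's tracepoints, as the notice suggests
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis/debug after the run
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0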
00:29:57.759 [2024-07-15 13:15:19.416297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.416335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.416750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.416779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.417192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.417221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.417560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.417590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.418061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.418092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.418495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.418525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.418928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.418957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.419402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.419433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.419818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.419847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.420081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.420110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 
00:29:57.759 [2024-07-15 13:15:19.420273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.420303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.420702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.420731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.420949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.420979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.759 [2024-07-15 13:15:19.421387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.759 [2024-07-15 13:15:19.421417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.759 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.421835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.421866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.422266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.422296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.422702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.422732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.423121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.423150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.423543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.423572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.423980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.424009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 
00:29:57.760 [2024-07-15 13:15:19.424294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.424325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.424730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.424760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.425013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.425042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.425445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.425475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.425866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.425896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.426177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.426209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.426630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.426660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.426963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.426993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.427395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.427425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.427870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.427899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 
00:29:57.760 [2024-07-15 13:15:19.428276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.428308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.428731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.428761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.429151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.429180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.429596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.429627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.430031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.430060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.430468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.430499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.430863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.430894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.431062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.431090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.431402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.431439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.431842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.431872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 
00:29:57.760 [2024-07-15 13:15:19.432137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.432173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.432611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.432643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.433035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.433065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.433483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.433516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.433759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.433789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.434061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.434089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.434563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.434594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.434999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.435029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.435410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.435440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 00:29:57.760 [2024-07-15 13:15:19.435905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.760 [2024-07-15 13:15:19.435934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.760 qpair failed and we were unable to recover it. 
00:29:57.766 [2024-07-15 13:15:19.513779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.513808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.514210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.514249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.514471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.514501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.514900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.514930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.515341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.515372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.515813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.515842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.516251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.516283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.516563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.516592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.517003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.517033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.517282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.517311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 
00:29:57.766 [2024-07-15 13:15:19.517554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.517584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.517906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.517936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.518345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.518375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.518507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.518537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.518788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.518818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.519208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.519248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.519624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.519654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.519938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.519973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.520190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.520219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.520620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.520650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 
00:29:57.766 [2024-07-15 13:15:19.520928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.520960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.521357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.521388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.521783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.521812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.522080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.522113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.522492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.522523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.522926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.522956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.523179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.523208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.523348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.523378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.523771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.523800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.524206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.524247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 
00:29:57.766 [2024-07-15 13:15:19.524632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.524661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.525067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.525096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.525201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.525228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.525517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.525547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.525931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.766 [2024-07-15 13:15:19.525961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.766 qpair failed and we were unable to recover it. 00:29:57.766 [2024-07-15 13:15:19.526347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.526377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.526788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.526818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.527220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.527259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.527661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.527691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.528073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.528102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 
00:29:57.767 [2024-07-15 13:15:19.528484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.528516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.528764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.528795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.529041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.529070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.529412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.529442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.529860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.529890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.530294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.530324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.530722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.530752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.531211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.531250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.531622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.531652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.532059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.532089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 
00:29:57.767 [2024-07-15 13:15:19.532572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.532605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.533000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.533031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.533410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.533441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.533844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.533876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.534162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.534191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.534505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.534538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.534660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.534690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.534951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.534987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.535393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.535423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.535827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.535858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 
00:29:57.767 [2024-07-15 13:15:19.536273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.536304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.536698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.536728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.537009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.537038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.537284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.537315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.537751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.537780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.538162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.538191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.538485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.538516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.538739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.538769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.539166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.539197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.539616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.539648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 
00:29:57.767 [2024-07-15 13:15:19.540047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.540077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.540471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.540503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.540907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.540938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.541359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.541392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.541636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.541665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.542057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.767 [2024-07-15 13:15:19.542087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.767 qpair failed and we were unable to recover it. 00:29:57.767 [2024-07-15 13:15:19.542331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.542362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.542792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.542822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.543016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.543046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.543447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.543479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 
00:29:57.768 [2024-07-15 13:15:19.543743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.543774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.544057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.544088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.544472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.544506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.544780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.544810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.545222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.545263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.545533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.545563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.545967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.545999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.546429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.546459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.546860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.546889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.547304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.547337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 
00:29:57.768 [2024-07-15 13:15:19.547565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.547595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.547976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.548006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.548408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.548438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.548839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.548873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.549145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.549175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.549580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.549610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.549987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.550017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.550415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.550452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.550817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.550847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.551063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.551093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 
00:29:57.768 [2024-07-15 13:15:19.551526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.551557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.551837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.551868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.552104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.552135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.552546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.552578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.552980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.553009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.553397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.553427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.553837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.553868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.554339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.554370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.554723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.554753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.555114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.555144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 
00:29:57.768 [2024-07-15 13:15:19.555449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.555480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.555877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.555907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.556313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.556344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.556755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.556784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.557174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.557203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.557607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.557636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.558035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.768 [2024-07-15 13:15:19.558064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.768 qpair failed and we were unable to recover it. 00:29:57.768 [2024-07-15 13:15:19.558323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 13:15:19.558354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 13:15:19.558839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 13:15:19.558867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 13:15:19.559257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 13:15:19.559289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 
00:29:57.769 [2024-07-15 13:15:19.559573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 13:15:19.559603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 13:15:19.559972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 13:15:19.560001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 13:15:19.560404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 13:15:19.560436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 13:15:19.560689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 13:15:19.560720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 13:15:19.561081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 13:15:19.561111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 13:15:19.561498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 13:15:19.561529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 13:15:19.561936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 13:15:19.561966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 13:15:19.562427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 13:15:19.562459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 13:15:19.562737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 13:15:19.562766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 00:29:57.769 [2024-07-15 13:15:19.563165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.769 [2024-07-15 13:15:19.563194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:57.769 qpair failed and we were unable to recover it. 
00:29:57.769 [2024-07-15 13:15:19.563588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.769 [2024-07-15 13:15:19.563620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.769 qpair failed and we were unable to recover it.
00:29:57.769 [2024-07-15 13:15:19.564028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.769 [2024-07-15 13:15:19.564058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:57.769 qpair failed and we were unable to recover it.
00:29:58.045 [... the same three-line failure (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every subsequent connect attempt from [2024-07-15 13:15:19.564429] through [2024-07-15 13:15:19.644714], console time 00:29:57.769 - 00:29:58.050 ...]
00:29:58.050 [2024-07-15 13:15:19.645083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-07-15 13:15:19.645111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-07-15 13:15:19.645491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-07-15 13:15:19.645522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-07-15 13:15:19.645911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-07-15 13:15:19.645942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-07-15 13:15:19.646375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-07-15 13:15:19.646406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-07-15 13:15:19.646644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-07-15 13:15:19.646673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-07-15 13:15:19.647037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-07-15 13:15:19.647067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-07-15 13:15:19.647445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-07-15 13:15:19.647474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-07-15 13:15:19.647691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-07-15 13:15:19.647721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-07-15 13:15:19.648139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-07-15 13:15:19.648167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-07-15 13:15:19.648405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-07-15 13:15:19.648436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 
00:29:58.050 [2024-07-15 13:15:19.648820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-07-15 13:15:19.648849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-07-15 13:15:19.649267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-07-15 13:15:19.649299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.050 qpair failed and we were unable to recover it. 00:29:58.050 [2024-07-15 13:15:19.649703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.050 [2024-07-15 13:15:19.649734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.650137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.650165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.650403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.650435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.650895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.650924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.651328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.651358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.651742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.651772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.652113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.652143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.652532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.652563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 
00:29:58.051 [2024-07-15 13:15:19.652963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.652992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.653243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.653276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.653564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.653594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.653977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.654006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.654410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.654442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.654855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.654884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.655283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.655313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.655715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.655746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.656153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.656183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.656591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.656622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 
00:29:58.051 [2024-07-15 13:15:19.657019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.657048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.657446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.657478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.657875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.657906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.658269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.658301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.658552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.658582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.658967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.658997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.659402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.659433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.659685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.659722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.660103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.660132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.660579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.660611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 
00:29:58.051 [2024-07-15 13:15:19.661011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.661040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.661450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.661481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.661872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.661900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.662375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.662405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.662766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.662795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.663223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.663262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.663727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.663757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.664025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.664055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.664417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.664447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.664848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.664878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 
00:29:58.051 [2024-07-15 13:15:19.665275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.665306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.665579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.665610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.666021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.666050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.051 qpair failed and we were unable to recover it. 00:29:58.051 [2024-07-15 13:15:19.666330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.051 [2024-07-15 13:15:19.666360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.666615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.666644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.667035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.667064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.667477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.667507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.667785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.667814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.668203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.668242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.668606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.668635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 
00:29:58.052 [2024-07-15 13:15:19.669038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.669069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.669281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.669311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.669769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.669799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.670185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.670215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.670496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.670529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.670923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.670953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.671344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.671375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.671749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.671779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.672179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.672209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.672438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.672470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 
00:29:58.052 [2024-07-15 13:15:19.672582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.672609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.672987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.673015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.673423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.673455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.673892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.673922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.674322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.674354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.674758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.674787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.675193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.675222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.675501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.675530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.675925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.675955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.676345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.676376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 
00:29:58.052 [2024-07-15 13:15:19.676731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.676760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.676990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.677020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.677252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.677283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.677576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.677605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.678012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.678042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.678452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.678484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.678871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.678901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.679300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.679331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.679620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.679649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.679843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.679874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 
00:29:58.052 [2024-07-15 13:15:19.680255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.680286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.680579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.680608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.681010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.681039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.681446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.681476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.052 qpair failed and we were unable to recover it. 00:29:58.052 [2024-07-15 13:15:19.681911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.052 [2024-07-15 13:15:19.681942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.682334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.682366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.682744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.682773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.683178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.683208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.683480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.683511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.683909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.683939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 
00:29:58.053 [2024-07-15 13:15:19.684341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.684371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.684754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.684783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.685178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.685207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.685460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.685490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.685844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.685878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.686290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.686321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.686726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.686756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.687148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.687177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.687583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.687615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.687973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.688004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 
00:29:58.053 [2024-07-15 13:15:19.688383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.688413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.688808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.688837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.689073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.689103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.689356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.689386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.689779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.689809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.690197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.690226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.690501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.690532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.690936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.690965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.691361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.691392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.691636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.691665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 
00:29:58.053 [2024-07-15 13:15:19.692084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.692113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.692490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.692520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.692915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.692944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.693046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.693070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.693436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.693467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.693871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.693900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.694023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.694055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.694289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.694321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.694739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.694769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 00:29:58.053 [2024-07-15 13:15:19.695005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.053 [2024-07-15 13:15:19.695035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.053 qpair failed and we were unable to recover it. 
00:29:58.053 [2024-07-15 13:15:19.695430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.695461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.695835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.695865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.696092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.696121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.696520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.696550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.696938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.696967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.697199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.697228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.697647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.697676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.698099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.698128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.698527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.698557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.698952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.698981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 
00:29:58.054 [2024-07-15 13:15:19.699395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.699426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.699707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.699739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.700133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.700163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.700624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.700655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.700888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.700923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.701097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.701124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.701363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.701392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.701832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.701861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.702249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.702280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.702522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.702551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 
00:29:58.054 [2024-07-15 13:15:19.702768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.702797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.703200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.703248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.703615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.703646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.704009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.704038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.704441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.704472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.704743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.704772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.705183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.705212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.705597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.705627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.706019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.706051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.706450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.706480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 
00:29:58.054 [2024-07-15 13:15:19.706907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.706936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.707186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.707215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.707577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.707607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.708000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.708029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.708433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.708464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.708864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.708894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.709290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.709319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.709720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.709749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.710154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.710184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.054 [2024-07-15 13:15:19.710352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.710387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 
00:29:58.054 [2024-07-15 13:15:19.710806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.054 [2024-07-15 13:15:19.710835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.054 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.711102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.711135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.711412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.711442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.711720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.711752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.712150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.712180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.712498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.712529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.712982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.713012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.713412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.713442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.713716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.713745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.713987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.714017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 
00:29:58.055 [2024-07-15 13:15:19.714266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.714296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.714737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.714765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.715047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.715078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.715494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.715525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.715820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.715855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.716244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.716275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.716409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.716435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.716814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.716842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.717252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.717284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.717730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.717759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 
00:29:58.055 [2024-07-15 13:15:19.718161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.718190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.718576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.718606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.719025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.719056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.719497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.719528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.719772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.719804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.720199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.720248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.720660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.720689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.721098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.721127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.721535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.721565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.721955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.721984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 
00:29:58.055 [2024-07-15 13:15:19.722387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.722417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.722840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.722869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.723262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.723293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.723573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.723602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.724009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.724038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.724415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.724448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.724851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.724881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.725161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.725190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.725603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.725634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.726041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.726071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 
00:29:58.055 [2024-07-15 13:15:19.726489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.726520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-07-15 13:15:19.726922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-07-15 13:15:19.726952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.727367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.727398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.727662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.727694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.728083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.728112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.728510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.728541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.728951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.728980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.729343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.729374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.729775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.729804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.730049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.730080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 
00:29:58.056 [2024-07-15 13:15:19.730307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.730339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.730752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.730782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.731181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.731210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.731612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.731643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.732054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.732095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.732479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.732510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.732759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.732789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.733185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.733214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.733607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.733637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.734037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.734066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 
00:29:58.056 [2024-07-15 13:15:19.734449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.734480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.734867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.734897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.735303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.735333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.735727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.735757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.736149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.736179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.736559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.736589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.736870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.736902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.737304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.737334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.737751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.737780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.738174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.738204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 
00:29:58.056 [2024-07-15 13:15:19.738672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.738704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.739111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.739142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.739527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.739557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.739817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.739848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.740259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.740290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.740692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.740722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.740981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.741012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.741436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.741467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.741869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.741898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.742190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.742219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 
00:29:58.056 [2024-07-15 13:15:19.742626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.742656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.743124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.743155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-07-15 13:15:19.743443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-07-15 13:15:19.743474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.743732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.743763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.744148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.744178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.744551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.744580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.744982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.745011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.745381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.745412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.745800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.745829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.746219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.746260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 
00:29:58.057 [2024-07-15 13:15:19.746577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.746606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.747010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.747039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.747439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.747470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.747706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.747735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.748091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.748126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.748495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.748526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.748759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.748789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.749182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.749212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.749626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.749656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.750062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.750091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 
00:29:58.057 [2024-07-15 13:15:19.750509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.750539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.750770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.750799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.751222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.751263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.751662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.751692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.752093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.752122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.752394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.752426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.752684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.752715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.753101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.753131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.753529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.753560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.753945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.753974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 
00:29:58.057 [2024-07-15 13:15:19.754390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.754419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.754795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.754824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.755238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.755269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.755501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.755532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.755810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.755840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.756270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.756301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.756685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.756715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.757102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.757133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.757506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.757536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.757946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.757977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 
00:29:58.057 [2024-07-15 13:15:19.758376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.758406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.758815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.758845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.759254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.759285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-07-15 13:15:19.759737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-07-15 13:15:19.759767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.058 [2024-07-15 13:15:19.760209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.058 [2024-07-15 13:15:19.760261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.058 qpair failed and we were unable to recover it. 00:29:58.058 [2024-07-15 13:15:19.760679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.058 [2024-07-15 13:15:19.760709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.058 qpair failed and we were unable to recover it. 00:29:58.058 [2024-07-15 13:15:19.760942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.058 [2024-07-15 13:15:19.760971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.058 qpair failed and we were unable to recover it. 00:29:58.058 [2024-07-15 13:15:19.761369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.058 [2024-07-15 13:15:19.761401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.058 qpair failed and we were unable to recover it. 00:29:58.058 [2024-07-15 13:15:19.761814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.058 [2024-07-15 13:15:19.761844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.058 qpair failed and we were unable to recover it. 00:29:58.058 [2024-07-15 13:15:19.762258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.058 [2024-07-15 13:15:19.762289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.058 qpair failed and we were unable to recover it. 
00:29:58.058-00:29:58.063 [2024-07-15 13:15:19.762719 - 13:15:19.842940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.058-00:29:58.063 nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.058-00:29:58.063 qpair failed and we were unable to recover it. (this same three-message error sequence recurs for every connection attempt throughout the interval)
00:29:58.063 [2024-07-15 13:15:19.843227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.843268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.843687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.843716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.843958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.843987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.844365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.844396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.844816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.844845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.845245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.845275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.845701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.845730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.846131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.846160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.846420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.846451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.846677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.846706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 
00:29:58.063 [2024-07-15 13:15:19.846957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.846986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.847359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.847391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.847801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.847830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.848227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.848269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.848677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.848707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.849112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.063 [2024-07-15 13:15:19.849142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.063 qpair failed and we were unable to recover it. 00:29:58.063 [2024-07-15 13:15:19.849549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.849580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 00:29:58.064 [2024-07-15 13:15:19.849969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.849999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 00:29:58.064 [2024-07-15 13:15:19.850344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.850375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 00:29:58.064 [2024-07-15 13:15:19.850778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.850808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 
00:29:58.064 [2024-07-15 13:15:19.851200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.851241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 00:29:58.064 [2024-07-15 13:15:19.851669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.851701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 00:29:58.064 [2024-07-15 13:15:19.851977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.852008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 00:29:58.064 [2024-07-15 13:15:19.852398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.852429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 00:29:58.064 [2024-07-15 13:15:19.852644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.852675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 00:29:58.064 [2024-07-15 13:15:19.853055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.853086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 00:29:58.064 [2024-07-15 13:15:19.853309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.853339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 00:29:58.064 [2024-07-15 13:15:19.853769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.853799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 00:29:58.064 [2024-07-15 13:15:19.854183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.854213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 00:29:58.064 [2024-07-15 13:15:19.854536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.064 [2024-07-15 13:15:19.854566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.064 qpair failed and we were unable to recover it. 
00:29:58.360 [2024-07-15 13:15:19.854973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.855005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.855393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.855426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.855915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.855944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.856332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.856363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.856773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.856808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.857219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.857264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.857544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.857571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.857807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.857833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.858215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.858265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.858652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.858683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 
00:29:58.360 [2024-07-15 13:15:19.858961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.858993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.859389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.859420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.859821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.859852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.860259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.860292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.860563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.860592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.860989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.861020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.861133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.861162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.861467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.861498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.861859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.861889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.862291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.862322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 
00:29:58.360 [2024-07-15 13:15:19.862713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.862744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.863144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.863175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.863581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.863612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.863993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.864023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.864401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.864433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.864862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.864892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.865285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.865316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.865711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.865742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.866168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.866199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.866619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.866650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 
00:29:58.360 [2024-07-15 13:15:19.867045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.867076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.867534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.867566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.867796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.867826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.868193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.868223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.868639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.868671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.869058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.869089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.869322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.869354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.869626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.869659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.869944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.869976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 00:29:58.360 [2024-07-15 13:15:19.870398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.360 [2024-07-15 13:15:19.870430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.360 qpair failed and we were unable to recover it. 
00:29:58.360 [2024-07-15 13:15:19.870868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.870897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.871145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.871175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.871460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.871493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.871779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.871810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.872211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.872257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.872662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.872692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.873023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.873053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.873442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.873473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.873879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.873909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.874310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.874342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 
00:29:58.361 [2024-07-15 13:15:19.874740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.874771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.875155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.875187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.875609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.875642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.876048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.876079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.876489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.876520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.876963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.876993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.877397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.877430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.877709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.877741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.877986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.878017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.878409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.878441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 
00:29:58.361 [2024-07-15 13:15:19.878841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.878872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.879245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.879277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.879529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.879559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.879947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.879977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.880211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.880262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.880534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.880564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.880956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.880987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.881371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.881402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.881820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.881849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.882254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.882285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 
00:29:58.361 [2024-07-15 13:15:19.882549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.882579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.882968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.883004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.883415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.883445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.883847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.883877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.884271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.884302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.884574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.884605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.885003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.885032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.885301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.885334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.885619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.885651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.886074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.886104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 
00:29:58.361 [2024-07-15 13:15:19.886486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.886517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.886931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.886960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.887320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.887350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.887786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.887815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.888058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.888087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.888450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.888481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.888929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.888958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.889069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.889096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.889456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.889488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.889896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.889925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 
00:29:58.361 [2024-07-15 13:15:19.890302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.361 [2024-07-15 13:15:19.890332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.361 qpair failed and we were unable to recover it. 00:29:58.361 [2024-07-15 13:15:19.890695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.362 [2024-07-15 13:15:19.890724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.362 qpair failed and we were unable to recover it. 00:29:58.362 [2024-07-15 13:15:19.891128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.362 [2024-07-15 13:15:19.891158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.362 qpair failed and we were unable to recover it. 00:29:58.362 [2024-07-15 13:15:19.891565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.362 [2024-07-15 13:15:19.891595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.362 qpair failed and we were unable to recover it. 00:29:58.362 [2024-07-15 13:15:19.892013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.362 [2024-07-15 13:15:19.892043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.362 qpair failed and we were unable to recover it. 00:29:58.362 [2024-07-15 13:15:19.892308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.362 [2024-07-15 13:15:19.892339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.362 qpair failed and we were unable to recover it. 00:29:58.362 [2024-07-15 13:15:19.892765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.362 [2024-07-15 13:15:19.892794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.362 qpair failed and we were unable to recover it. 00:29:58.362 [2024-07-15 13:15:19.893188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.362 [2024-07-15 13:15:19.893218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.362 qpair failed and we were unable to recover it. 00:29:58.362 [2024-07-15 13:15:19.893629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.362 [2024-07-15 13:15:19.893662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.362 qpair failed and we were unable to recover it. 00:29:58.362 [2024-07-15 13:15:19.894042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.362 [2024-07-15 13:15:19.894073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.362 qpair failed and we were unable to recover it. 
00:29:58.362 [2024-07-15 13:15:19.894404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.362 [2024-07-15 13:15:19.894435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:58.362 qpair failed and we were unable to recover it.
00:29:58.362 [... the same three-line sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt timestamped 13:15:19.894 through 13:15:19.975 ...]
00:29:58.366 [2024-07-15 13:15:19.975170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.366 [2024-07-15 13:15:19.975200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:58.366 qpair failed and we were unable to recover it.
00:29:58.366 [2024-07-15 13:15:19.975589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.975622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.975893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.975929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.976328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.976358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.976788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.976818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.977207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.977250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.977529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.977562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.977972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.978001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.978252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.978283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.978687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.978720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.979077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.979107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 
00:29:58.366 [2024-07-15 13:15:19.979517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.979548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.979830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.979860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.980250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.980281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.980526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.980557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.980798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.980828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.981092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.981122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.981520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.981551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.981945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.981974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.982389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.982420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.982794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.982824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 
00:29:58.366 [2024-07-15 13:15:19.983222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.983263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.983673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.983701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.984109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.984138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.984521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.984552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.984944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.984974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.985352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.985381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.985611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.985640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.986044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.986073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.986377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.986408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.986820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.986850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 
00:29:58.366 [2024-07-15 13:15:19.987255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.987286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.987726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.987754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.988032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.988061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.988474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.988505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.988884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.988913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.989137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.989167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.989534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.989565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.989952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.989981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.990217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.990258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.990669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.990697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 
00:29:58.366 [2024-07-15 13:15:19.991093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.991122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.991415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.991451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.991871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.991900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.992310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.992341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.992746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.992776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.993166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.993197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.993477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.993508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.993917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.993947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.994322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.994353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.994605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.994634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 
00:29:58.366 [2024-07-15 13:15:19.995045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.995073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.995498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.995528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.995926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.995956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.996322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.996354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.996786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.996815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.997225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.997277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.997552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.997583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.997942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.997971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.998396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.998426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.998828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.998858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 
00:29:58.366 [2024-07-15 13:15:19.999250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.999280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.999675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:19.999704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:19.999994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.366 [2024-07-15 13:15:20.000023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.366 qpair failed and we were unable to recover it. 00:29:58.366 [2024-07-15 13:15:20.000455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.000486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.000773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.000802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.001199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.001227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.001510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.001543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.001793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.001824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.002264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.002296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.002404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.002430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 
00:29:58.367 [2024-07-15 13:15:20.002733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.002763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.003177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.003208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.004016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.004047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.004210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.004257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.004699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.004731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.005036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.005065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.005375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.005406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.005815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.005845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.006298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.006328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.006597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.006629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 
00:29:58.367 [2024-07-15 13:15:20.006906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.006937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.007331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.007368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.007798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.007827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.008116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.008146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.008530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.008560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.008668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.008697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.009081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.009111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.009508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.009539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.009807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.009839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.010252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.010284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 
00:29:58.367 [2024-07-15 13:15:20.010692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.010720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.011057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.011089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.011335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.011366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.011755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.011786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.012016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.012045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.012452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.012484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.012954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.012983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.013400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.013432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.013832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.013862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.014105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.014134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 
00:29:58.367 [2024-07-15 13:15:20.014547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.014578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.014986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.015016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.015410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.015441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.015854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.015884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.016162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.016192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.016424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.016454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.016747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.016778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.017054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.017084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.017495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.017526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.017758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.017788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 
00:29:58.367 [2024-07-15 13:15:20.018090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.018122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.018316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.018348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.018623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.018656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.019099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.019131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.019375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.019406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.019824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.019853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.020170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.020200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.020616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.020647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:58.367 [2024-07-15 13:15:20.021038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.021068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 
00:29:58.367 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:58.367 [2024-07-15 13:15:20.021452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.021484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.021644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:58.367 [2024-07-15 13:15:20.021682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.367 [2024-07-15 13:15:20.021943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.021973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.022349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.022380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.022799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.022829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.023243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.023274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.023684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.023716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.024073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.024102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 00:29:58.367 [2024-07-15 13:15:20.024483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.367 [2024-07-15 13:15:20.024514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.367 qpair failed and we were unable to recover it. 
00:29:58.367 [2024-07-15 13:15:20.024875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.367 [2024-07-15 13:15:20.024905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420
00:29:58.367 qpair failed and we were unable to recover it.
00:29:58.367 [... the same three-line record (connect() failed, errno = 111 / sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for each reconnect attempt from 13:15:20.025299 through 13:15:20.062337 ...]
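errno = 111 here is ECONNREFUSED: the initiator keeps calling connect() toward 10.0.0.2:4420 while nothing on the target side is listening on that address yet, so every attempt is refused and each qpair is torn down without recovering. A minimal shell sketch of the same failure mode, using 127.0.0.1:4420 purely as an illustrative closed port (not taken from this run); the exact error wording varies between shells:

# bash's /dev/tcp pseudo-device issues a plain TCP connect(); with no listener
# on the port the kernel returns ECONNREFUSED (errno 111), the same errno
# SPDK's posix_sock_create() keeps logging above.
bash -c 'exec 3<>/dev/tcp/127.0.0.1/4420' || echo "connect() was refused (exit $?)"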
00:29:58.369 [... connect() failed (errno = 111) retry records continue, 13:15:20.062741 through 13:15:20.063926 ...]
00:29:58.369 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:58.369 [... retry record at 13:15:20.064339 ...]
00:29:58.369 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:58.369 [... retry record at 13:15:20.064759 ...]
00:29:58.369 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:58.369 [... retry record at 13:15:20.065129 ...]
00:29:58.369 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.369 [... retry record at 13:15:20.065572 ...]
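While the initiator is still retrying in the background, the traced script starts building the target side. rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client, and bdev_malloc_create 64 512 -b Malloc0 asks the running target for a 64 MiB RAM-backed bdev with 512-byte blocks named Malloc0. A sketch of the same step issued directly with scripts/rpc.py, assuming a target app listening on the default RPC socket (the socket path is an assumption, not something shown in this log):

# Create a 64 MiB malloc bdev with a 512-byte block size, named Malloc0.
# -s points rpc.py at the target's RPC socket (default path shown).
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
# On success the call prints the new bdev's name ("Malloc0"), which is
# exactly the Malloc0 line that shows up in the trace further below.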
00:29:58.369 [... identical connect() failed (errno = 111) / sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." records continue from 13:15:20.066007 through 13:15:20.085174 ...]
00:29:58.370 [... retry records continue, 13:15:20.085545 through 13:15:20.086205 ...]
00:29:58.370 Malloc0
00:29:58.370 [... retry records at 13:15:20.086666 and 13:15:20.087102 ...]
00:29:58.370 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:58.370 [... retry record at 13:15:20.087513 ...]
00:29:58.370 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:58.370 [... retry record at 13:15:20.087766 ...]
00:29:58.370 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:58.370 [... retry record at 13:15:20.088115 ...]
00:29:58.370 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.370 [... identical connect() failed (errno = 111) retry records continue from 13:15:20.088535 through 13:15:20.091837 ...]
00:29:58.370 [... retry records continue, 13:15:20.092247 through 13:15:20.093422 ...]
00:29:58.370 [2024-07-15 13:15:20.093762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:58.370 [... retry records continue, 13:15:20.093856 through 13:15:20.095316 ...]
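The "*** TCP Transport Init ***" notice is the target-side acknowledgement of the nvmf_create_transport call traced a few lines earlier: the NVMe-oF TCP transport now exists inside the target, but no listener has been added on 10.0.0.2:4420 yet, which is why the initiator's connect() attempts are still being refused. A sketch of the same call made directly with scripts/rpc.py, reduced to the transport type only (the traced command also passes -o, a harness-supplied option left out of this sketch):

# Create the NVMe-oF TCP transport inside the running SPDK target.
./scripts/rpc.py nvmf_create_transport -t tcp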
00:29:58.370 [2024-07-15 13:15:20.095723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.370 [2024-07-15 13:15:20.095753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.370 qpair failed and we were unable to recover it. 00:29:58.370 [2024-07-15 13:15:20.096140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.370 [2024-07-15 13:15:20.096171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.370 qpair failed and we were unable to recover it. 00:29:58.370 [2024-07-15 13:15:20.096436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.370 [2024-07-15 13:15:20.096466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.370 qpair failed and we were unable to recover it. 00:29:58.370 [2024-07-15 13:15:20.096868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.370 [2024-07-15 13:15:20.096898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.370 qpair failed and we were unable to recover it. 00:29:58.370 [2024-07-15 13:15:20.097311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.370 [2024-07-15 13:15:20.097341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.370 qpair failed and we were unable to recover it. 00:29:58.370 [2024-07-15 13:15:20.097769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.370 [2024-07-15 13:15:20.097798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.370 qpair failed and we were unable to recover it. 00:29:58.370 [2024-07-15 13:15:20.098190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.370 [2024-07-15 13:15:20.098219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.370 qpair failed and we were unable to recover it. 00:29:58.370 [2024-07-15 13:15:20.098657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.370 [2024-07-15 13:15:20.098688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.370 qpair failed and we were unable to recover it. 00:29:58.370 [2024-07-15 13:15:20.099086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.370 [2024-07-15 13:15:20.099116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.370 qpair failed and we were unable to recover it. 00:29:58.370 [2024-07-15 13:15:20.099467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.370 [2024-07-15 13:15:20.099498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe150000b90 with addr=10.0.0.2, port=4420 00:29:58.370 qpair failed and we were unable to recover it. 
[... the connect()/sock-connection-error/"qpair failed" retries continue from 13:15:20.099891 through 13:15:20.102839 ...]
00:29:58.370 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:58.370 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:58.370 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:58.370 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same retry sequence, interleaved with the xtrace lines above in the raw log, continues from 13:15:20.103241 through 13:15:20.114042 ...]
[... the connect()/sock-connection-error/"qpair failed" retries continue from 13:15:20.114331 through 13:15:20.115265 ...]
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same retry sequence, interleaved with the xtrace lines above in the raw log, continues from 13:15:20.115708 through 13:15:20.124792 ...]
[... the connect()/sock-connection-error/"qpair failed" retries continue from 13:15:20.125198 through 13:15:20.126543 ...]
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same retry sequence, interleaved with the xtrace lines above in the raw log, continues from 13:15:20.127004 through 13:15:20.133909 ...]
00:29:58.371 [2024-07-15 13:15:20.134149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.371 [2024-07-15 13:15:20.145044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.371 [2024-07-15 13:15:20.145246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.371 [2024-07-15 13:15:20.145303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.371 [2024-07-15 13:15:20.145329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.371 [2024-07-15 13:15:20.145351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90
00:29:58.371 [2024-07-15 13:15:20.145407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.371 qpair failed and we were unable to recover it.
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:58.371 13:15:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 888359
00:29:58.371 [2024-07-15 13:15:20.154838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.371 [2024-07-15 13:15:20.154963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.371 [2024-07-15 13:15:20.154999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.371 [2024-07-15 13:15:20.155015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.371 [2024-07-15 13:15:20.155030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90
00:29:58.371 [2024-07-15 13:15:20.155066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.371 qpair failed and we were unable to recover it.
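The xtrace fragments scattered through the retries above show the target-side setup that test case tc2 performs: create the subsystem, attach the Malloc0 namespace, then add the data and discovery listeners on 10.0.0.2:4420. A rough stand-alone equivalent using SPDK's plain RPC client is sketched below; this is a reconstruction from the trace, not the test script itself (rpc_cmd is the harness's wrapper around the RPC client, the transport-creation step is only inferred from the "*** TCP Transport Init ***" notice, and the rpc.py path is an assumption).

```bash
#!/usr/bin/env bash
# Sketch only: replays the nvmf RPCs visible in the xtrace above with the plain
# SPDK RPC client. Bdev name (Malloc0) and address (10.0.0.2:4420) are taken
# from the log; RPC=./scripts/rpc.py assumes an SPDK checkout as working dir.
set -euo pipefail

RPC=./scripts/rpc.py

# A TCP transport must exist first (the log's "*** TCP Transport Init ***" notice).
$RPC nvmf_create_transport -t tcp

# Subsystem with any-host access (-a) and a fixed serial number (-s).
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

# Expose the Malloc0 bdev as a namespace of the subsystem.
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Listen for NVMe/TCP on the address the initiator in this log keeps dialing,
# plus the discovery subsystem on the same address/port.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Until those listener RPCs complete, nothing is bound to 10.0.0.2:4420, which is consistent with every connect() attempt earlier in the log failing with errno 111.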
00:29:58.371 [2024-07-15 13:15:20.164806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.371 [2024-07-15 13:15:20.164890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.371 [2024-07-15 13:15:20.164916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.371 [2024-07-15 13:15:20.164929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.371 [2024-07-15 13:15:20.164939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90
00:29:58.371 [2024-07-15 13:15:20.164962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:58.371 qpair failed and we were unable to recover it.
[... the same block - Unknown controller ID 0x1 / Connect command failed (rc -5, sct 1, sc 130) / Failed to poll NVMe-oF Fabric CONNECT command / Failed to connect tqpair=0x7fe150000b90 / CQ transport error -6 (No such device or address) on qpair id 4 / "qpair failed and we were unable to recover it." - repeats for every subsequent connect attempt from 13:15:20.174800 through 13:15:20.395453 ...]
00:29:58.636 [2024-07-15 13:15:20.405518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.636 [2024-07-15 13:15:20.405610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.636 [2024-07-15 13:15:20.405630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.636 [2024-07-15 13:15:20.405639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.636 [2024-07-15 13:15:20.405646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.636 [2024-07-15 13:15:20.405665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.636 qpair failed and we were unable to recover it. 00:29:58.636 [2024-07-15 13:15:20.415443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.636 [2024-07-15 13:15:20.415530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.636 [2024-07-15 13:15:20.415550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.636 [2024-07-15 13:15:20.415566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.636 [2024-07-15 13:15:20.415573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.636 [2024-07-15 13:15:20.415590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.636 qpair failed and we were unable to recover it. 00:29:58.636 [2024-07-15 13:15:20.425766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.636 [2024-07-15 13:15:20.425874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.636 [2024-07-15 13:15:20.425895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.636 [2024-07-15 13:15:20.425904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.636 [2024-07-15 13:15:20.425911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.636 [2024-07-15 13:15:20.425928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.636 qpair failed and we were unable to recover it. 
00:29:58.636 [2024-07-15 13:15:20.435622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.636 [2024-07-15 13:15:20.435697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.636 [2024-07-15 13:15:20.435718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.636 [2024-07-15 13:15:20.435726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.636 [2024-07-15 13:15:20.435732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.636 [2024-07-15 13:15:20.435750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.636 qpair failed and we were unable to recover it. 00:29:58.636 [2024-07-15 13:15:20.445636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.636 [2024-07-15 13:15:20.445753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.636 [2024-07-15 13:15:20.445776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.636 [2024-07-15 13:15:20.445784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.636 [2024-07-15 13:15:20.445791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.636 [2024-07-15 13:15:20.445809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.636 qpair failed and we were unable to recover it. 00:29:58.636 [2024-07-15 13:15:20.455634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.636 [2024-07-15 13:15:20.455710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.636 [2024-07-15 13:15:20.455731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.636 [2024-07-15 13:15:20.455739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.636 [2024-07-15 13:15:20.455746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.636 [2024-07-15 13:15:20.455763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.636 qpair failed and we were unable to recover it. 
00:29:58.899 [2024-07-15 13:15:20.465705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.899 [2024-07-15 13:15:20.465817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.899 [2024-07-15 13:15:20.465839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.899 [2024-07-15 13:15:20.465847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.899 [2024-07-15 13:15:20.465855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.899 [2024-07-15 13:15:20.465873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.899 qpair failed and we were unable to recover it. 00:29:58.899 [2024-07-15 13:15:20.475639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.899 [2024-07-15 13:15:20.475713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.899 [2024-07-15 13:15:20.475735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.899 [2024-07-15 13:15:20.475743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.899 [2024-07-15 13:15:20.475750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.899 [2024-07-15 13:15:20.475768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.899 qpair failed and we were unable to recover it. 00:29:58.899 [2024-07-15 13:15:20.485669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.899 [2024-07-15 13:15:20.485755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.899 [2024-07-15 13:15:20.485776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.899 [2024-07-15 13:15:20.485785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.899 [2024-07-15 13:15:20.485792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.899 [2024-07-15 13:15:20.485810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.899 qpair failed and we were unable to recover it. 
00:29:58.899 [2024-07-15 13:15:20.495741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.899 [2024-07-15 13:15:20.495817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.899 [2024-07-15 13:15:20.495838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.899 [2024-07-15 13:15:20.495846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.899 [2024-07-15 13:15:20.495854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.899 [2024-07-15 13:15:20.495870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.899 qpair failed and we were unable to recover it. 00:29:58.899 [2024-07-15 13:15:20.505816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.899 [2024-07-15 13:15:20.505913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.899 [2024-07-15 13:15:20.505956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.899 [2024-07-15 13:15:20.505967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.899 [2024-07-15 13:15:20.505975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.899 [2024-07-15 13:15:20.505998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.899 qpair failed and we were unable to recover it. 00:29:58.899 [2024-07-15 13:15:20.515821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.899 [2024-07-15 13:15:20.515896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.899 [2024-07-15 13:15:20.515931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.515942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.515950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.515972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 
00:29:58.900 [2024-07-15 13:15:20.525747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.525828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.525855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.525865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.525872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.525893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 00:29:58.900 [2024-07-15 13:15:20.535822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.535911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.535933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.535942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.535949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.535967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 00:29:58.900 [2024-07-15 13:15:20.545821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.545911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.545933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.545941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.545948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.545973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 
00:29:58.900 [2024-07-15 13:15:20.555881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.555960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.555996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.556006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.556014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.556037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 00:29:58.900 [2024-07-15 13:15:20.565911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.565996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.566033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.566042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.566050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.566073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 00:29:58.900 [2024-07-15 13:15:20.575854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.575971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.576006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.576019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.576027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.576050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 
00:29:58.900 [2024-07-15 13:15:20.586052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.586146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.586172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.586182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.586189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.586208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 00:29:58.900 [2024-07-15 13:15:20.595988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.596067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.596095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.596103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.596109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.596128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 00:29:58.900 [2024-07-15 13:15:20.606028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.606114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.606135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.606143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.606152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.606170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 
00:29:58.900 [2024-07-15 13:15:20.615959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.616036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.616059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.616067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.616077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.616103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 00:29:58.900 [2024-07-15 13:15:20.626161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.626256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.626279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.626289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.626297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.626316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 00:29:58.900 [2024-07-15 13:15:20.636100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.636170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.636191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.636199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.636206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.636241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 
00:29:58.900 [2024-07-15 13:15:20.646110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.646205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.646226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.646243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.646250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.900 [2024-07-15 13:15:20.646269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.900 qpair failed and we were unable to recover it. 00:29:58.900 [2024-07-15 13:15:20.656245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.900 [2024-07-15 13:15:20.656321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.900 [2024-07-15 13:15:20.656342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.900 [2024-07-15 13:15:20.656350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.900 [2024-07-15 13:15:20.656357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.901 [2024-07-15 13:15:20.656376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.901 qpair failed and we were unable to recover it. 00:29:58.901 [2024-07-15 13:15:20.666292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.901 [2024-07-15 13:15:20.666443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.901 [2024-07-15 13:15:20.666466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.901 [2024-07-15 13:15:20.666474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.901 [2024-07-15 13:15:20.666481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.901 [2024-07-15 13:15:20.666498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.901 qpair failed and we were unable to recover it. 
00:29:58.901 [2024-07-15 13:15:20.676240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.901 [2024-07-15 13:15:20.676312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.901 [2024-07-15 13:15:20.676332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.901 [2024-07-15 13:15:20.676340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.901 [2024-07-15 13:15:20.676347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.901 [2024-07-15 13:15:20.676366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.901 qpair failed and we were unable to recover it. 00:29:58.901 [2024-07-15 13:15:20.686290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.901 [2024-07-15 13:15:20.686372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.901 [2024-07-15 13:15:20.686392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.901 [2024-07-15 13:15:20.686401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.901 [2024-07-15 13:15:20.686408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.901 [2024-07-15 13:15:20.686425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.901 qpair failed and we were unable to recover it. 00:29:58.901 [2024-07-15 13:15:20.696293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.901 [2024-07-15 13:15:20.696407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.901 [2024-07-15 13:15:20.696427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.901 [2024-07-15 13:15:20.696436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.901 [2024-07-15 13:15:20.696443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.901 [2024-07-15 13:15:20.696460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.901 qpair failed and we were unable to recover it. 
00:29:58.901 [2024-07-15 13:15:20.706405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.901 [2024-07-15 13:15:20.706510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.901 [2024-07-15 13:15:20.706531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.901 [2024-07-15 13:15:20.706540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.901 [2024-07-15 13:15:20.706547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.901 [2024-07-15 13:15:20.706565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.901 qpair failed and we were unable to recover it. 00:29:58.901 [2024-07-15 13:15:20.716350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.901 [2024-07-15 13:15:20.716446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.901 [2024-07-15 13:15:20.716467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.901 [2024-07-15 13:15:20.716475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.901 [2024-07-15 13:15:20.716482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:58.901 [2024-07-15 13:15:20.716501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.901 qpair failed and we were unable to recover it. 00:29:59.163 [2024-07-15 13:15:20.726391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.163 [2024-07-15 13:15:20.726515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.163 [2024-07-15 13:15:20.726537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.163 [2024-07-15 13:15:20.726546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.163 [2024-07-15 13:15:20.726559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.163 [2024-07-15 13:15:20.726576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.163 qpair failed and we were unable to recover it. 
00:29:59.163 [2024-07-15 13:15:20.736444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.163 [2024-07-15 13:15:20.736520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.163 [2024-07-15 13:15:20.736542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.163 [2024-07-15 13:15:20.736550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.163 [2024-07-15 13:15:20.736557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.163 [2024-07-15 13:15:20.736576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.163 qpair failed and we were unable to recover it. 00:29:59.163 [2024-07-15 13:15:20.746600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.163 [2024-07-15 13:15:20.746693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.163 [2024-07-15 13:15:20.746714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.163 [2024-07-15 13:15:20.746722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.163 [2024-07-15 13:15:20.746729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.163 [2024-07-15 13:15:20.746748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.163 qpair failed and we were unable to recover it. 00:29:59.163 [2024-07-15 13:15:20.756504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.163 [2024-07-15 13:15:20.756632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.163 [2024-07-15 13:15:20.756653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.163 [2024-07-15 13:15:20.756661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.163 [2024-07-15 13:15:20.756668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.163 [2024-07-15 13:15:20.756685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.163 qpair failed and we were unable to recover it. 
00:29:59.163 [2024-07-15 13:15:20.766503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.163 [2024-07-15 13:15:20.766576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.163 [2024-07-15 13:15:20.766597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.163 [2024-07-15 13:15:20.766605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.163 [2024-07-15 13:15:20.766613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.163 [2024-07-15 13:15:20.766632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.163 qpair failed and we were unable to recover it. 00:29:59.163 [2024-07-15 13:15:20.776444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.163 [2024-07-15 13:15:20.776520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.163 [2024-07-15 13:15:20.776540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.163 [2024-07-15 13:15:20.776548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.163 [2024-07-15 13:15:20.776556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.163 [2024-07-15 13:15:20.776574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.163 qpair failed and we were unable to recover it. 00:29:59.163 [2024-07-15 13:15:20.786655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.163 [2024-07-15 13:15:20.786760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.163 [2024-07-15 13:15:20.786781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.163 [2024-07-15 13:15:20.786789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.163 [2024-07-15 13:15:20.786797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.163 [2024-07-15 13:15:20.786814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.163 qpair failed and we were unable to recover it. 
00:29:59.163 [2024-07-15 13:15:20.796570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.163 [2024-07-15 13:15:20.796654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.163 [2024-07-15 13:15:20.796674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.163 [2024-07-15 13:15:20.796683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.163 [2024-07-15 13:15:20.796691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.163 [2024-07-15 13:15:20.796708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.163 qpair failed and we were unable to recover it. 00:29:59.163 [2024-07-15 13:15:20.806665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.163 [2024-07-15 13:15:20.806760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.163 [2024-07-15 13:15:20.806782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.163 [2024-07-15 13:15:20.806790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.163 [2024-07-15 13:15:20.806797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.163 [2024-07-15 13:15:20.806815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.163 qpair failed and we were unable to recover it. 00:29:59.163 [2024-07-15 13:15:20.816697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.164 [2024-07-15 13:15:20.816794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.164 [2024-07-15 13:15:20.816816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.164 [2024-07-15 13:15:20.816830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.164 [2024-07-15 13:15:20.816837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.164 [2024-07-15 13:15:20.816855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.164 qpair failed and we were unable to recover it. 
00:29:59.164 [2024-07-15 13:15:20.826786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.164 [2024-07-15 13:15:20.826895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.164 [2024-07-15 13:15:20.826917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.164 [2024-07-15 13:15:20.826925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.164 [2024-07-15 13:15:20.826932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.164 [2024-07-15 13:15:20.826949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.164 qpair failed and we were unable to recover it. 00:29:59.164 [2024-07-15 13:15:20.836727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.164 [2024-07-15 13:15:20.836798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.164 [2024-07-15 13:15:20.836818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.164 [2024-07-15 13:15:20.836827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.164 [2024-07-15 13:15:20.836833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.164 [2024-07-15 13:15:20.836851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.164 qpair failed and we were unable to recover it. 00:29:59.164 [2024-07-15 13:15:20.846689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.164 [2024-07-15 13:15:20.846794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.164 [2024-07-15 13:15:20.846815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.164 [2024-07-15 13:15:20.846823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.164 [2024-07-15 13:15:20.846830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.164 [2024-07-15 13:15:20.846847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.164 qpair failed and we were unable to recover it. 
00:29:59.164 [2024-07-15 13:15:20.856770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.164 [2024-07-15 13:15:20.856899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.164 [2024-07-15 13:15:20.856920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.164 [2024-07-15 13:15:20.856928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.164 [2024-07-15 13:15:20.856935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.164 [2024-07-15 13:15:20.856952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.164 qpair failed and we were unable to recover it. 00:29:59.164 [2024-07-15 13:15:20.866827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.164 [2024-07-15 13:15:20.866924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.164 [2024-07-15 13:15:20.866958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.164 [2024-07-15 13:15:20.866968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.164 [2024-07-15 13:15:20.866976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.164 [2024-07-15 13:15:20.867000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.164 qpair failed and we were unable to recover it. 00:29:59.164 [2024-07-15 13:15:20.876777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.164 [2024-07-15 13:15:20.876860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.164 [2024-07-15 13:15:20.876889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.164 [2024-07-15 13:15:20.876899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.164 [2024-07-15 13:15:20.876906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.164 [2024-07-15 13:15:20.876928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.164 qpair failed and we were unable to recover it. 
00:29:59.164 [2024-07-15 13:15:20.886861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.164 [2024-07-15 13:15:20.887011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.164 [2024-07-15 13:15:20.887047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.164 [2024-07-15 13:15:20.887058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.164 [2024-07-15 13:15:20.887065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.164 [2024-07-15 13:15:20.887087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.164 qpair failed and we were unable to recover it. 00:29:59.164 [2024-07-15 13:15:20.896888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.164 [2024-07-15 13:15:20.896964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.164 [2024-07-15 13:15:20.896988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.164 [2024-07-15 13:15:20.896996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.164 [2024-07-15 13:15:20.897003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.164 [2024-07-15 13:15:20.897022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.164 qpair failed and we were unable to recover it. 00:29:59.164 [2024-07-15 13:15:20.906879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.164 [2024-07-15 13:15:20.906978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.164 [2024-07-15 13:15:20.907000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.164 [2024-07-15 13:15:20.907022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.164 [2024-07-15 13:15:20.907029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.164 [2024-07-15 13:15:20.907049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.164 qpair failed and we were unable to recover it. 
00:29:59.164 [2024-07-15 13:15:20.916942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.164 [2024-07-15 13:15:20.917019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.164 [2024-07-15 13:15:20.917040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.164 [2024-07-15 13:15:20.917048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.164 [2024-07-15 13:15:20.917055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.164 [2024-07-15 13:15:20.917073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.164 qpair failed and we were unable to recover it. 00:29:59.164 [2024-07-15 13:15:20.926847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.164 [2024-07-15 13:15:20.926934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.165 [2024-07-15 13:15:20.926956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.165 [2024-07-15 13:15:20.926964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.165 [2024-07-15 13:15:20.926971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.165 [2024-07-15 13:15:20.926995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.165 qpair failed and we were unable to recover it. 00:29:59.165 [2024-07-15 13:15:20.937014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.165 [2024-07-15 13:15:20.937091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.165 [2024-07-15 13:15:20.937112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.165 [2024-07-15 13:15:20.937120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.165 [2024-07-15 13:15:20.937128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.165 [2024-07-15 13:15:20.937146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.165 qpair failed and we were unable to recover it. 
00:29:59.165 [2024-07-15 13:15:20.946920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.165 [2024-07-15 13:15:20.947010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.165 [2024-07-15 13:15:20.947031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.165 [2024-07-15 13:15:20.947040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.165 [2024-07-15 13:15:20.947048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.165 [2024-07-15 13:15:20.947065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.165 qpair failed and we were unable to recover it. 00:29:59.165 [2024-07-15 13:15:20.957060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.165 [2024-07-15 13:15:20.957145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.165 [2024-07-15 13:15:20.957167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.165 [2024-07-15 13:15:20.957175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.165 [2024-07-15 13:15:20.957183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.165 [2024-07-15 13:15:20.957203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.165 qpair failed and we were unable to recover it. 00:29:59.165 [2024-07-15 13:15:20.967101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.165 [2024-07-15 13:15:20.967183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.165 [2024-07-15 13:15:20.967205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.165 [2024-07-15 13:15:20.967213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.165 [2024-07-15 13:15:20.967220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.165 [2024-07-15 13:15:20.967246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.165 qpair failed and we were unable to recover it. 
00:29:59.165 [2024-07-15 13:15:20.977023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.165 [2024-07-15 13:15:20.977098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.165 [2024-07-15 13:15:20.977119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.165 [2024-07-15 13:15:20.977127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.165 [2024-07-15 13:15:20.977135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.165 [2024-07-15 13:15:20.977153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.165 qpair failed and we were unable to recover it. 00:29:59.165 [2024-07-15 13:15:20.987151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:20.987257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:20.987279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:20.987288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:20.987295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:20.987313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 00:29:59.426 [2024-07-15 13:15:20.997160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:20.997238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:20.997266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:20.997274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:20.997283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:20.997301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 
00:29:59.426 [2024-07-15 13:15:21.007194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.007274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.007295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.007303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.007310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.007327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 00:29:59.426 [2024-07-15 13:15:21.017180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.017267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.017288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.017298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.017306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.017323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 00:29:59.426 [2024-07-15 13:15:21.027163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.027266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.027289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.027298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.027305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.027322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 
00:29:59.426 [2024-07-15 13:15:21.037284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.037365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.037385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.037393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.037400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.037423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 00:29:59.426 [2024-07-15 13:15:21.047348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.047438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.047461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.047469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.047476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.047493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 00:29:59.426 [2024-07-15 13:15:21.057265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.057339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.057362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.057370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.057377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.057396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 
00:29:59.426 [2024-07-15 13:15:21.067384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.067482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.067502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.067512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.067519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.067537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 00:29:59.426 [2024-07-15 13:15:21.077414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.077492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.077512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.077519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.077528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.077545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 00:29:59.426 [2024-07-15 13:15:21.087425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.087496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.087522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.087530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.087539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.087556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 
00:29:59.426 [2024-07-15 13:15:21.097358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.097432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.097452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.097460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.097467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.097486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 00:29:59.426 [2024-07-15 13:15:21.107521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.107611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.107631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.107639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.107647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.107664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 00:29:59.426 [2024-07-15 13:15:21.117540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.117615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.117635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.117644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.117652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.117670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 
00:29:59.426 [2024-07-15 13:15:21.127575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.127653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.127673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.127681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.127695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.127712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 00:29:59.426 [2024-07-15 13:15:21.137596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.137667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.137688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.137696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.137703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.137722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 00:29:59.426 [2024-07-15 13:15:21.147562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.147643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.147663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.147671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.147678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.147697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 
00:29:59.426 [2024-07-15 13:15:21.157636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.157710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.157731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.426 [2024-07-15 13:15:21.157739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.426 [2024-07-15 13:15:21.157748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.426 [2024-07-15 13:15:21.157765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.426 qpair failed and we were unable to recover it. 00:29:59.426 [2024-07-15 13:15:21.167594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.426 [2024-07-15 13:15:21.167671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.426 [2024-07-15 13:15:21.167691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.427 [2024-07-15 13:15:21.167700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.427 [2024-07-15 13:15:21.167708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.427 [2024-07-15 13:15:21.167725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.427 qpair failed and we were unable to recover it. 00:29:59.427 [2024-07-15 13:15:21.177707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.427 [2024-07-15 13:15:21.177796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.427 [2024-07-15 13:15:21.177817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.427 [2024-07-15 13:15:21.177827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.427 [2024-07-15 13:15:21.177834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.427 [2024-07-15 13:15:21.177851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.427 qpair failed and we were unable to recover it. 
00:29:59.427 [2024-07-15 13:15:21.187736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.427 [2024-07-15 13:15:21.187878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.427 [2024-07-15 13:15:21.187899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.427 [2024-07-15 13:15:21.187907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.427 [2024-07-15 13:15:21.187915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.427 [2024-07-15 13:15:21.187932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.427 qpair failed and we were unable to recover it. 00:29:59.427 [2024-07-15 13:15:21.197659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.427 [2024-07-15 13:15:21.197736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.427 [2024-07-15 13:15:21.197756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.427 [2024-07-15 13:15:21.197763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.427 [2024-07-15 13:15:21.197772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.427 [2024-07-15 13:15:21.197790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.427 qpair failed and we were unable to recover it. 00:29:59.427 [2024-07-15 13:15:21.207828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.427 [2024-07-15 13:15:21.207904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.427 [2024-07-15 13:15:21.207924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.427 [2024-07-15 13:15:21.207932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.427 [2024-07-15 13:15:21.207938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.427 [2024-07-15 13:15:21.207956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.427 qpair failed and we were unable to recover it. 
00:29:59.427 [2024-07-15 13:15:21.217841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.427 [2024-07-15 13:15:21.217917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.427 [2024-07-15 13:15:21.217937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.427 [2024-07-15 13:15:21.217950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.427 [2024-07-15 13:15:21.217958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.427 [2024-07-15 13:15:21.217976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.427 qpair failed and we were unable to recover it. 00:29:59.427 [2024-07-15 13:15:21.227871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.427 [2024-07-15 13:15:21.227958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.427 [2024-07-15 13:15:21.227993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.427 [2024-07-15 13:15:21.228002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.427 [2024-07-15 13:15:21.228010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.427 [2024-07-15 13:15:21.228033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.427 qpair failed and we were unable to recover it. 00:29:59.427 [2024-07-15 13:15:21.237875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.427 [2024-07-15 13:15:21.237952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.427 [2024-07-15 13:15:21.237988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.427 [2024-07-15 13:15:21.237997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.427 [2024-07-15 13:15:21.238005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.427 [2024-07-15 13:15:21.238028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.427 qpair failed and we were unable to recover it. 
00:29:59.427 [2024-07-15 13:15:21.247880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.427 [2024-07-15 13:15:21.247974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.427 [2024-07-15 13:15:21.248007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.427 [2024-07-15 13:15:21.248018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.427 [2024-07-15 13:15:21.248026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.427 [2024-07-15 13:15:21.248049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.427 qpair failed and we were unable to recover it. 00:29:59.688 [2024-07-15 13:15:21.257949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.688 [2024-07-15 13:15:21.258024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.688 [2024-07-15 13:15:21.258048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.688 [2024-07-15 13:15:21.258057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.688 [2024-07-15 13:15:21.258067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.688 [2024-07-15 13:15:21.258086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.688 qpair failed and we were unable to recover it. 00:29:59.688 [2024-07-15 13:15:21.267985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.688 [2024-07-15 13:15:21.268062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.688 [2024-07-15 13:15:21.268083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.688 [2024-07-15 13:15:21.268092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.688 [2024-07-15 13:15:21.268099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.688 [2024-07-15 13:15:21.268118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.688 qpair failed and we were unable to recover it. 
00:29:59.688 [2024-07-15 13:15:21.277995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.688 [2024-07-15 13:15:21.278128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.688 [2024-07-15 13:15:21.278150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.688 [2024-07-15 13:15:21.278158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.688 [2024-07-15 13:15:21.278165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.688 [2024-07-15 13:15:21.278182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.688 qpair failed and we were unable to recover it. 00:29:59.688 [2024-07-15 13:15:21.288030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.688 [2024-07-15 13:15:21.288100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.688 [2024-07-15 13:15:21.288120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.688 [2024-07-15 13:15:21.288128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.688 [2024-07-15 13:15:21.288135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.689 [2024-07-15 13:15:21.288154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.689 qpair failed and we were unable to recover it. 00:29:59.689 [2024-07-15 13:15:21.297964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.689 [2024-07-15 13:15:21.298040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.689 [2024-07-15 13:15:21.298062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.689 [2024-07-15 13:15:21.298071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.689 [2024-07-15 13:15:21.298079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.689 [2024-07-15 13:15:21.298105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.689 qpair failed and we were unable to recover it. 
00:29:59.689 [2024-07-15 13:15:21.308091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.689 [2024-07-15 13:15:21.308181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.689 [2024-07-15 13:15:21.308203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.689 [2024-07-15 13:15:21.308218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.689 [2024-07-15 13:15:21.308224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.689 [2024-07-15 13:15:21.308250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.689 qpair failed and we were unable to recover it. 00:29:59.689 [2024-07-15 13:15:21.318129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.689 [2024-07-15 13:15:21.318204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.689 [2024-07-15 13:15:21.318225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.689 [2024-07-15 13:15:21.318241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.689 [2024-07-15 13:15:21.318248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.689 [2024-07-15 13:15:21.318266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.689 qpair failed and we were unable to recover it. 00:29:59.689 [2024-07-15 13:15:21.328176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.689 [2024-07-15 13:15:21.328263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.689 [2024-07-15 13:15:21.328284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.689 [2024-07-15 13:15:21.328293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.689 [2024-07-15 13:15:21.328300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.689 [2024-07-15 13:15:21.328318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.689 qpair failed and we were unable to recover it. 
00:29:59.689 [2024-07-15 13:15:21.338268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.689 [2024-07-15 13:15:21.338362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.689 [2024-07-15 13:15:21.338383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.689 [2024-07-15 13:15:21.338392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.689 [2024-07-15 13:15:21.338399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.689 [2024-07-15 13:15:21.338417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.689 qpair failed and we were unable to recover it. 00:29:59.689 [2024-07-15 13:15:21.348212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.689 [2024-07-15 13:15:21.348309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.689 [2024-07-15 13:15:21.348330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.689 [2024-07-15 13:15:21.348338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.690 [2024-07-15 13:15:21.348346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.690 [2024-07-15 13:15:21.348364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.690 qpair failed and we were unable to recover it. 00:29:59.690 [2024-07-15 13:15:21.358170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.690 [2024-07-15 13:15:21.358246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.690 [2024-07-15 13:15:21.358269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.690 [2024-07-15 13:15:21.358278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.690 [2024-07-15 13:15:21.358287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.690 [2024-07-15 13:15:21.358306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.690 qpair failed and we were unable to recover it. 
00:29:59.690 [2024-07-15 13:15:21.368289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.690 [2024-07-15 13:15:21.368368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.690 [2024-07-15 13:15:21.368394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.690 [2024-07-15 13:15:21.368404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.690 [2024-07-15 13:15:21.368411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.690 [2024-07-15 13:15:21.368430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.690 qpair failed and we were unable to recover it. 00:29:59.690 [2024-07-15 13:15:21.378314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.690 [2024-07-15 13:15:21.378389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.690 [2024-07-15 13:15:21.378411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.690 [2024-07-15 13:15:21.378419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.690 [2024-07-15 13:15:21.378427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.690 [2024-07-15 13:15:21.378446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.690 qpair failed and we were unable to recover it. 00:29:59.690 [2024-07-15 13:15:21.388270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.690 [2024-07-15 13:15:21.388356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.690 [2024-07-15 13:15:21.388376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.690 [2024-07-15 13:15:21.388384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.690 [2024-07-15 13:15:21.388393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.690 [2024-07-15 13:15:21.388411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.690 qpair failed and we were unable to recover it. 
00:29:59.690 [2024-07-15 13:15:21.398399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.690 [2024-07-15 13:15:21.398473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.690 [2024-07-15 13:15:21.398499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.690 [2024-07-15 13:15:21.398507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.690 [2024-07-15 13:15:21.398515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.690 [2024-07-15 13:15:21.398533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.690 qpair failed and we were unable to recover it. 00:29:59.691 [2024-07-15 13:15:21.408289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.691 [2024-07-15 13:15:21.408361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.691 [2024-07-15 13:15:21.408382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.691 [2024-07-15 13:15:21.408390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.691 [2024-07-15 13:15:21.408398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.691 [2024-07-15 13:15:21.408417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.691 qpair failed and we were unable to recover it. 00:29:59.691 [2024-07-15 13:15:21.418443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.691 [2024-07-15 13:15:21.418521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.691 [2024-07-15 13:15:21.418540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.691 [2024-07-15 13:15:21.418549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.691 [2024-07-15 13:15:21.418556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.691 [2024-07-15 13:15:21.418574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.691 qpair failed and we were unable to recover it. 
00:29:59.691 [2024-07-15 13:15:21.428371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.691 [2024-07-15 13:15:21.428495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.691 [2024-07-15 13:15:21.428516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.691 [2024-07-15 13:15:21.428525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.691 [2024-07-15 13:15:21.428532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.691 [2024-07-15 13:15:21.428549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.691 qpair failed and we were unable to recover it. 00:29:59.691 [2024-07-15 13:15:21.438471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.691 [2024-07-15 13:15:21.438543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.691 [2024-07-15 13:15:21.438562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.691 [2024-07-15 13:15:21.438571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.691 [2024-07-15 13:15:21.438577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.691 [2024-07-15 13:15:21.438601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.691 qpair failed and we were unable to recover it. 00:29:59.691 [2024-07-15 13:15:21.448525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.691 [2024-07-15 13:15:21.448594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.691 [2024-07-15 13:15:21.448614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.691 [2024-07-15 13:15:21.448623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.691 [2024-07-15 13:15:21.448630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.691 [2024-07-15 13:15:21.448648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.691 qpair failed and we were unable to recover it. 
00:29:59.691 [2024-07-15 13:15:21.458559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.691 [2024-07-15 13:15:21.458644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.691 [2024-07-15 13:15:21.458664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.691 [2024-07-15 13:15:21.458672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.692 [2024-07-15 13:15:21.458679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.692 [2024-07-15 13:15:21.458697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.692 qpair failed and we were unable to recover it. 00:29:59.692 [2024-07-15 13:15:21.468553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.692 [2024-07-15 13:15:21.468682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.692 [2024-07-15 13:15:21.468702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.692 [2024-07-15 13:15:21.468711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.692 [2024-07-15 13:15:21.468718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.692 [2024-07-15 13:15:21.468735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.692 qpair failed and we were unable to recover it. 00:29:59.692 [2024-07-15 13:15:21.478590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.692 [2024-07-15 13:15:21.478660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.692 [2024-07-15 13:15:21.478680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.692 [2024-07-15 13:15:21.478688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.692 [2024-07-15 13:15:21.478695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.692 [2024-07-15 13:15:21.478714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.692 qpair failed and we were unable to recover it. 
00:29:59.692 [2024-07-15 13:15:21.488620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.692 [2024-07-15 13:15:21.488707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.692 [2024-07-15 13:15:21.488733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.692 [2024-07-15 13:15:21.488741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.692 [2024-07-15 13:15:21.488748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.692 [2024-07-15 13:15:21.488765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.692 qpair failed and we were unable to recover it. 00:29:59.692 [2024-07-15 13:15:21.498637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.692 [2024-07-15 13:15:21.498714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.692 [2024-07-15 13:15:21.498734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.692 [2024-07-15 13:15:21.498743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.692 [2024-07-15 13:15:21.498750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.692 [2024-07-15 13:15:21.498768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.692 qpair failed and we were unable to recover it. 00:29:59.692 [2024-07-15 13:15:21.508723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.692 [2024-07-15 13:15:21.508825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.692 [2024-07-15 13:15:21.508847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.692 [2024-07-15 13:15:21.508855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.692 [2024-07-15 13:15:21.508862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.692 [2024-07-15 13:15:21.508880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.692 qpair failed and we were unable to recover it. 
00:29:59.955 [2024-07-15 13:15:21.518704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.955 [2024-07-15 13:15:21.518776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.955 [2024-07-15 13:15:21.518796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.955 [2024-07-15 13:15:21.518805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.956 [2024-07-15 13:15:21.518812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.956 [2024-07-15 13:15:21.518830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.956 qpair failed and we were unable to recover it. 00:29:59.956 [2024-07-15 13:15:21.528753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.956 [2024-07-15 13:15:21.528832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.956 [2024-07-15 13:15:21.528852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.956 [2024-07-15 13:15:21.528860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.956 [2024-07-15 13:15:21.528875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.956 [2024-07-15 13:15:21.528893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.956 qpair failed and we were unable to recover it. 00:29:59.956 [2024-07-15 13:15:21.538671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.956 [2024-07-15 13:15:21.538748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.956 [2024-07-15 13:15:21.538773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.956 [2024-07-15 13:15:21.538782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.956 [2024-07-15 13:15:21.538790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.956 [2024-07-15 13:15:21.538809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.956 qpair failed and we were unable to recover it. 
00:29:59.956 [2024-07-15 13:15:21.548793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.956 [2024-07-15 13:15:21.548882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.956 [2024-07-15 13:15:21.548903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.956 [2024-07-15 13:15:21.548912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.956 [2024-07-15 13:15:21.548919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.956 [2024-07-15 13:15:21.548938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.956 qpair failed and we were unable to recover it. 00:29:59.956 [2024-07-15 13:15:21.558885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.956 [2024-07-15 13:15:21.558998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.956 [2024-07-15 13:15:21.559019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.956 [2024-07-15 13:15:21.559028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.956 [2024-07-15 13:15:21.559035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.956 [2024-07-15 13:15:21.559053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.956 qpair failed and we were unable to recover it. 00:29:59.956 [2024-07-15 13:15:21.568745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.956 [2024-07-15 13:15:21.568814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.956 [2024-07-15 13:15:21.568835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.956 [2024-07-15 13:15:21.568843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.956 [2024-07-15 13:15:21.568850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.956 [2024-07-15 13:15:21.568869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.956 qpair failed and we were unable to recover it. 
00:29:59.956 [2024-07-15 13:15:21.578899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.956 [2024-07-15 13:15:21.578982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.956 [2024-07-15 13:15:21.579004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.956 [2024-07-15 13:15:21.579012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.956 [2024-07-15 13:15:21.579020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.956 [2024-07-15 13:15:21.579037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.956 qpair failed and we were unable to recover it. 00:29:59.956 [2024-07-15 13:15:21.588942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.956 [2024-07-15 13:15:21.589032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.956 [2024-07-15 13:15:21.589053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.956 [2024-07-15 13:15:21.589062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.956 [2024-07-15 13:15:21.589071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.956 [2024-07-15 13:15:21.589089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.956 qpair failed and we were unable to recover it. 00:29:59.956 [2024-07-15 13:15:21.598940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.956 [2024-07-15 13:15:21.599014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.956 [2024-07-15 13:15:21.599035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.956 [2024-07-15 13:15:21.599043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.956 [2024-07-15 13:15:21.599051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.956 [2024-07-15 13:15:21.599070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.956 qpair failed and we were unable to recover it. 
00:29:59.956 [2024-07-15 13:15:21.608990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.956 [2024-07-15 13:15:21.609062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.956 [2024-07-15 13:15:21.609082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.956 [2024-07-15 13:15:21.609090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.956 [2024-07-15 13:15:21.609097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.956 [2024-07-15 13:15:21.609115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.956 qpair failed and we were unable to recover it. 00:29:59.956 [2024-07-15 13:15:21.619038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.956 [2024-07-15 13:15:21.619132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.956 [2024-07-15 13:15:21.619153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.956 [2024-07-15 13:15:21.619161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.956 [2024-07-15 13:15:21.619173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.956 [2024-07-15 13:15:21.619191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.956 qpair failed and we were unable to recover it. 00:29:59.956 [2024-07-15 13:15:21.628944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.956 [2024-07-15 13:15:21.629021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.629042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.629051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.629058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.957 [2024-07-15 13:15:21.629084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.957 qpair failed and we were unable to recover it. 
00:29:59.957 [2024-07-15 13:15:21.639109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.957 [2024-07-15 13:15:21.639240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.639263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.639272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.639279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.957 [2024-07-15 13:15:21.639297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.957 qpair failed and we were unable to recover it. 00:29:59.957 [2024-07-15 13:15:21.649126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.957 [2024-07-15 13:15:21.649194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.649214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.649222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.649235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.957 [2024-07-15 13:15:21.649254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.957 qpair failed and we were unable to recover it. 00:29:59.957 [2024-07-15 13:15:21.659144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.957 [2024-07-15 13:15:21.659219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.659250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.659260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.659267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.957 [2024-07-15 13:15:21.659286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.957 qpair failed and we were unable to recover it. 
00:29:59.957 [2024-07-15 13:15:21.669173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.957 [2024-07-15 13:15:21.669282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.669306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.669314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.669321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.957 [2024-07-15 13:15:21.669338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.957 qpair failed and we were unable to recover it. 00:29:59.957 [2024-07-15 13:15:21.679192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.957 [2024-07-15 13:15:21.679273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.679302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.679312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.679319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.957 [2024-07-15 13:15:21.679341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.957 qpair failed and we were unable to recover it. 00:29:59.957 [2024-07-15 13:15:21.689172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.957 [2024-07-15 13:15:21.689254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.689276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.689285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.689292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.957 [2024-07-15 13:15:21.689312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.957 qpair failed and we were unable to recover it. 
00:29:59.957 [2024-07-15 13:15:21.699285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.957 [2024-07-15 13:15:21.699363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.699384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.699393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.699400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.957 [2024-07-15 13:15:21.699417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.957 qpair failed and we were unable to recover it. 00:29:59.957 [2024-07-15 13:15:21.709312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.957 [2024-07-15 13:15:21.709405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.709426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.709440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.709447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.957 [2024-07-15 13:15:21.709464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.957 qpair failed and we were unable to recover it. 00:29:59.957 [2024-07-15 13:15:21.719324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.957 [2024-07-15 13:15:21.719391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.719412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.719421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.719428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.957 [2024-07-15 13:15:21.719445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.957 qpair failed and we were unable to recover it. 
00:29:59.957 [2024-07-15 13:15:21.729385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.957 [2024-07-15 13:15:21.729514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.729535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.729544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.729551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.957 [2024-07-15 13:15:21.729569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.957 qpair failed and we were unable to recover it. 00:29:59.957 [2024-07-15 13:15:21.739408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.957 [2024-07-15 13:15:21.739496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.739517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.739525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.739532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.957 [2024-07-15 13:15:21.739549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.957 qpair failed and we were unable to recover it. 00:29:59.957 [2024-07-15 13:15:21.749495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.957 [2024-07-15 13:15:21.749587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.749608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.749617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.749624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.957 [2024-07-15 13:15:21.749642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.957 qpair failed and we were unable to recover it. 
00:29:59.957 [2024-07-15 13:15:21.759486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.957 [2024-07-15 13:15:21.759615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.957 [2024-07-15 13:15:21.759637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.957 [2024-07-15 13:15:21.759645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.957 [2024-07-15 13:15:21.759652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.958 [2024-07-15 13:15:21.759670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.958 qpair failed and we were unable to recover it. 00:29:59.958 [2024-07-15 13:15:21.769517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.958 [2024-07-15 13:15:21.769596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.958 [2024-07-15 13:15:21.769617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.958 [2024-07-15 13:15:21.769625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.958 [2024-07-15 13:15:21.769633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:29:59.958 [2024-07-15 13:15:21.769650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:59.958 qpair failed and we were unable to recover it. 00:30:00.221 [2024-07-15 13:15:21.779542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.221 [2024-07-15 13:15:21.779671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.221 [2024-07-15 13:15:21.779692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.221 [2024-07-15 13:15:21.779701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.221 [2024-07-15 13:15:21.779708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.221 [2024-07-15 13:15:21.779725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.221 qpair failed and we were unable to recover it. 
00:30:00.221 [2024-07-15 13:15:21.789545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.221 [2024-07-15 13:15:21.789626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.221 [2024-07-15 13:15:21.789646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.221 [2024-07-15 13:15:21.789655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.221 [2024-07-15 13:15:21.789662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.221 [2024-07-15 13:15:21.789680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.221 qpair failed and we were unable to recover it. 00:30:00.221 [2024-07-15 13:15:21.799632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.221 [2024-07-15 13:15:21.799711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.221 [2024-07-15 13:15:21.799737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.221 [2024-07-15 13:15:21.799746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.221 [2024-07-15 13:15:21.799752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.221 [2024-07-15 13:15:21.799771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.221 qpair failed and we were unable to recover it. 00:30:00.221 [2024-07-15 13:15:21.809619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.221 [2024-07-15 13:15:21.809697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.221 [2024-07-15 13:15:21.809718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.221 [2024-07-15 13:15:21.809726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.221 [2024-07-15 13:15:21.809733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.221 [2024-07-15 13:15:21.809750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.221 qpair failed and we were unable to recover it. 
00:30:00.221 [2024-07-15 13:15:21.819644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.221 [2024-07-15 13:15:21.819725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.221 [2024-07-15 13:15:21.819746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.221 [2024-07-15 13:15:21.819754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.221 [2024-07-15 13:15:21.819761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.221 [2024-07-15 13:15:21.819779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.221 qpair failed and we were unable to recover it. 00:30:00.221 [2024-07-15 13:15:21.829674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.221 [2024-07-15 13:15:21.829765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.221 [2024-07-15 13:15:21.829786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.221 [2024-07-15 13:15:21.829795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.221 [2024-07-15 13:15:21.829801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.221 [2024-07-15 13:15:21.829819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.221 qpair failed and we were unable to recover it. 00:30:00.221 [2024-07-15 13:15:21.839619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.221 [2024-07-15 13:15:21.839694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.221 [2024-07-15 13:15:21.839715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.221 [2024-07-15 13:15:21.839723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.221 [2024-07-15 13:15:21.839730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.221 [2024-07-15 13:15:21.839752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.221 qpair failed and we were unable to recover it. 
00:30:00.221 [2024-07-15 13:15:21.849728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.221 [2024-07-15 13:15:21.849801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.221 [2024-07-15 13:15:21.849822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.221 [2024-07-15 13:15:21.849830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.221 [2024-07-15 13:15:21.849838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.221 [2024-07-15 13:15:21.849855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.221 qpair failed and we were unable to recover it. 00:30:00.221 [2024-07-15 13:15:21.859759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.221 [2024-07-15 13:15:21.859828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.221 [2024-07-15 13:15:21.859848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.221 [2024-07-15 13:15:21.859857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.221 [2024-07-15 13:15:21.859863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.221 [2024-07-15 13:15:21.859882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.221 qpair failed and we were unable to recover it. 00:30:00.221 [2024-07-15 13:15:21.869787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.221 [2024-07-15 13:15:21.869867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.221 [2024-07-15 13:15:21.869887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.221 [2024-07-15 13:15:21.869895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.221 [2024-07-15 13:15:21.869902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.221 [2024-07-15 13:15:21.869919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.221 qpair failed and we were unable to recover it. 
00:30:00.221 [2024-07-15 13:15:21.879803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.221 [2024-07-15 13:15:21.879870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.221 [2024-07-15 13:15:21.879891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.221 [2024-07-15 13:15:21.879899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.221 [2024-07-15 13:15:21.879906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.221 [2024-07-15 13:15:21.879923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.221 qpair failed and we were unable to recover it. 00:30:00.222 [2024-07-15 13:15:21.889858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:21.889932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:21.889974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:21.889984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:21.889990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:21.890008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 00:30:00.222 [2024-07-15 13:15:21.899923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:21.900005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:21.900039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:21.900049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:21.900056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:21.900078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 
00:30:00.222 [2024-07-15 13:15:21.909908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:21.910006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:21.910040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:21.910049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:21.910057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:21.910080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 00:30:00.222 [2024-07-15 13:15:21.919924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:21.920003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:21.920026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:21.920035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:21.920043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:21.920061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 00:30:00.222 [2024-07-15 13:15:21.929958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:21.930026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:21.930048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:21.930056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:21.930069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:21.930088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 
00:30:00.222 [2024-07-15 13:15:21.939959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:21.940026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:21.940048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:21.940056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:21.940063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:21.940081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 00:30:00.222 [2024-07-15 13:15:21.949991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:21.950080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:21.950100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:21.950108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:21.950114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:21.950132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 00:30:00.222 [2024-07-15 13:15:21.960002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:21.960078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:21.960098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:21.960107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:21.960113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:21.960131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 
00:30:00.222 [2024-07-15 13:15:21.970086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:21.970150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:21.970169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:21.970176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:21.970183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:21.970200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 00:30:00.222 [2024-07-15 13:15:21.980036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:21.980105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:21.980124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:21.980132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:21.980139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:21.980155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 00:30:00.222 [2024-07-15 13:15:21.990095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:21.990163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:21.990180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:21.990188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:21.990195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:21.990211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 
00:30:00.222 [2024-07-15 13:15:22.000129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:22.000202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:22.000220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:22.000228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:22.000240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:22.000256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 00:30:00.222 [2024-07-15 13:15:22.010143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:22.010214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:22.010235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:22.010244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:22.010250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:22.010266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 00:30:00.222 [2024-07-15 13:15:22.020134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:22.020195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.222 [2024-07-15 13:15:22.020211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.222 [2024-07-15 13:15:22.020219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.222 [2024-07-15 13:15:22.020235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.222 [2024-07-15 13:15:22.020251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.222 qpair failed and we were unable to recover it. 
00:30:00.222 [2024-07-15 13:15:22.030083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.222 [2024-07-15 13:15:22.030171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.223 [2024-07-15 13:15:22.030189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.223 [2024-07-15 13:15:22.030197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.223 [2024-07-15 13:15:22.030204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.223 [2024-07-15 13:15:22.030221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.223 qpair failed and we were unable to recover it. 00:30:00.223 [2024-07-15 13:15:22.040221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.223 [2024-07-15 13:15:22.040349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.223 [2024-07-15 13:15:22.040367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.223 [2024-07-15 13:15:22.040375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.223 [2024-07-15 13:15:22.040381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.223 [2024-07-15 13:15:22.040397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.223 qpair failed and we were unable to recover it. 00:30:00.485 [2024-07-15 13:15:22.050245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.485 [2024-07-15 13:15:22.050311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.485 [2024-07-15 13:15:22.050328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.485 [2024-07-15 13:15:22.050335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.485 [2024-07-15 13:15:22.050342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.485 [2024-07-15 13:15:22.050358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.485 qpair failed and we were unable to recover it. 
00:30:00.485 [2024-07-15 13:15:22.060217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.485 [2024-07-15 13:15:22.060279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.485 [2024-07-15 13:15:22.060295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.485 [2024-07-15 13:15:22.060303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.485 [2024-07-15 13:15:22.060309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.485 [2024-07-15 13:15:22.060324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.485 qpair failed and we were unable to recover it. 00:30:00.486 [2024-07-15 13:15:22.070294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.070366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.070382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.070390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.070398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.070413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 00:30:00.486 [2024-07-15 13:15:22.080281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.080352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.080367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.080374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.080380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.080395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 
00:30:00.486 [2024-07-15 13:15:22.090377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.090481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.090497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.090505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.090512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.090526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 00:30:00.486 [2024-07-15 13:15:22.100340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.100419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.100434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.100441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.100448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.100463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 00:30:00.486 [2024-07-15 13:15:22.110384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.110452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.110467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.110478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.110485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.110501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 
00:30:00.486 [2024-07-15 13:15:22.120444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.120504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.120519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.120526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.120532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.120547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 00:30:00.486 [2024-07-15 13:15:22.130464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.130529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.130544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.130551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.130558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.130573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 00:30:00.486 [2024-07-15 13:15:22.140455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.140512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.140527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.140534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.140540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.140554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 
00:30:00.486 [2024-07-15 13:15:22.150479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.150552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.150567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.150575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.150581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.150596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 00:30:00.486 [2024-07-15 13:15:22.160544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.160647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.160662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.160670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.160676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.160691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 00:30:00.486 [2024-07-15 13:15:22.170457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.170518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.170532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.170540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.170546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.170561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 
00:30:00.486 [2024-07-15 13:15:22.180542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.180598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.180612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.180619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.180626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.180640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 00:30:00.486 [2024-07-15 13:15:22.190623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.190691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.190706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.190713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.190719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.190734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 00:30:00.486 [2024-07-15 13:15:22.200530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.486 [2024-07-15 13:15:22.200595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.486 [2024-07-15 13:15:22.200613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.486 [2024-07-15 13:15:22.200620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.486 [2024-07-15 13:15:22.200627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.486 [2024-07-15 13:15:22.200641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.486 qpair failed and we were unable to recover it. 
00:30:00.486 [2024-07-15 13:15:22.210707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.487 [2024-07-15 13:15:22.210768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.487 [2024-07-15 13:15:22.210782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.487 [2024-07-15 13:15:22.210789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.487 [2024-07-15 13:15:22.210795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.487 [2024-07-15 13:15:22.210809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.487 qpair failed and we were unable to recover it. 00:30:00.487 [2024-07-15 13:15:22.220710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.487 [2024-07-15 13:15:22.220764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.487 [2024-07-15 13:15:22.220778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.487 [2024-07-15 13:15:22.220785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.487 [2024-07-15 13:15:22.220792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.487 [2024-07-15 13:15:22.220806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.487 qpair failed and we were unable to recover it. 00:30:00.487 [2024-07-15 13:15:22.230741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.487 [2024-07-15 13:15:22.230799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.487 [2024-07-15 13:15:22.230813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.487 [2024-07-15 13:15:22.230820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.487 [2024-07-15 13:15:22.230826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.487 [2024-07-15 13:15:22.230840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.487 qpair failed and we were unable to recover it. 
00:30:00.487 [2024-07-15 13:15:22.240764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.487 [2024-07-15 13:15:22.240821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.487 [2024-07-15 13:15:22.240835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.487 [2024-07-15 13:15:22.240842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.487 [2024-07-15 13:15:22.240848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.487 [2024-07-15 13:15:22.240867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.487 qpair failed and we were unable to recover it. 00:30:00.487 [2024-07-15 13:15:22.250664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.487 [2024-07-15 13:15:22.250733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.487 [2024-07-15 13:15:22.250748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.487 [2024-07-15 13:15:22.250755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.487 [2024-07-15 13:15:22.250761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.487 [2024-07-15 13:15:22.250776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.487 qpair failed and we were unable to recover it. 00:30:00.487 [2024-07-15 13:15:22.260780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.487 [2024-07-15 13:15:22.260838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.487 [2024-07-15 13:15:22.260852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.487 [2024-07-15 13:15:22.260860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.487 [2024-07-15 13:15:22.260866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.487 [2024-07-15 13:15:22.260880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.487 qpair failed and we were unable to recover it. 
00:30:00.487 [2024-07-15 13:15:22.270859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.487 [2024-07-15 13:15:22.270922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.487 [2024-07-15 13:15:22.270936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.487 [2024-07-15 13:15:22.270943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.487 [2024-07-15 13:15:22.270949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.487 [2024-07-15 13:15:22.270964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.487 qpair failed and we were unable to recover it. 00:30:00.487 [2024-07-15 13:15:22.280863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.487 [2024-07-15 13:15:22.280930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.487 [2024-07-15 13:15:22.280954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.487 [2024-07-15 13:15:22.280962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.487 [2024-07-15 13:15:22.280970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.487 [2024-07-15 13:15:22.280989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.487 qpair failed and we were unable to recover it. 00:30:00.487 [2024-07-15 13:15:22.290890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.487 [2024-07-15 13:15:22.290954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.487 [2024-07-15 13:15:22.290982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.487 [2024-07-15 13:15:22.290992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.487 [2024-07-15 13:15:22.290999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.487 [2024-07-15 13:15:22.291018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.487 qpair failed and we were unable to recover it. 
00:30:00.487 [2024-07-15 13:15:22.300770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.487 [2024-07-15 13:15:22.300832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.487 [2024-07-15 13:15:22.300848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.487 [2024-07-15 13:15:22.300856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.487 [2024-07-15 13:15:22.300863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.487 [2024-07-15 13:15:22.300884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.487 qpair failed and we were unable to recover it. 00:30:00.750 [2024-07-15 13:15:22.310936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.750 [2024-07-15 13:15:22.311004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.750 [2024-07-15 13:15:22.311019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.750 [2024-07-15 13:15:22.311027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.750 [2024-07-15 13:15:22.311033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.750 [2024-07-15 13:15:22.311048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.750 qpair failed and we were unable to recover it. 00:30:00.750 [2024-07-15 13:15:22.320978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.750 [2024-07-15 13:15:22.321036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.750 [2024-07-15 13:15:22.321051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.750 [2024-07-15 13:15:22.321058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.750 [2024-07-15 13:15:22.321065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.750 [2024-07-15 13:15:22.321079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.750 qpair failed and we were unable to recover it. 
00:30:00.750 [2024-07-15 13:15:22.330873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.750 [2024-07-15 13:15:22.330945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.750 [2024-07-15 13:15:22.330960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.750 [2024-07-15 13:15:22.330967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.750 [2024-07-15 13:15:22.330973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.750 [2024-07-15 13:15:22.330991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.750 qpair failed and we were unable to recover it. 00:30:00.750 [2024-07-15 13:15:22.340999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.750 [2024-07-15 13:15:22.341057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.750 [2024-07-15 13:15:22.341071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.750 [2024-07-15 13:15:22.341078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.750 [2024-07-15 13:15:22.341084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.750 [2024-07-15 13:15:22.341099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.750 qpair failed and we were unable to recover it. 00:30:00.750 [2024-07-15 13:15:22.351091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.750 [2024-07-15 13:15:22.351176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.750 [2024-07-15 13:15:22.351191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.750 [2024-07-15 13:15:22.351199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.750 [2024-07-15 13:15:22.351205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.750 [2024-07-15 13:15:22.351219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.750 qpair failed and we were unable to recover it. 
00:30:00.750 [2024-07-15 13:15:22.361003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.750 [2024-07-15 13:15:22.361064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.750 [2024-07-15 13:15:22.361079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.750 [2024-07-15 13:15:22.361086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.750 [2024-07-15 13:15:22.361092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.750 [2024-07-15 13:15:22.361106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.750 qpair failed and we were unable to recover it. 00:30:00.750 [2024-07-15 13:15:22.371110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.750 [2024-07-15 13:15:22.371167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.750 [2024-07-15 13:15:22.371181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.750 [2024-07-15 13:15:22.371188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.750 [2024-07-15 13:15:22.371194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.750 [2024-07-15 13:15:22.371209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.750 qpair failed and we were unable to recover it. 00:30:00.750 [2024-07-15 13:15:22.381079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.750 [2024-07-15 13:15:22.381139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.750 [2024-07-15 13:15:22.381154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.750 [2024-07-15 13:15:22.381161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.750 [2024-07-15 13:15:22.381167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.750 [2024-07-15 13:15:22.381181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.750 qpair failed and we were unable to recover it. 
00:30:00.750 [2024-07-15 13:15:22.391170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.750 [2024-07-15 13:15:22.391238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.750 [2024-07-15 13:15:22.391253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.750 [2024-07-15 13:15:22.391260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.750 [2024-07-15 13:15:22.391266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.750 [2024-07-15 13:15:22.391281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.750 qpair failed and we were unable to recover it. 00:30:00.750 [2024-07-15 13:15:22.401181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.750 [2024-07-15 13:15:22.401244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.750 [2024-07-15 13:15:22.401259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.750 [2024-07-15 13:15:22.401266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.750 [2024-07-15 13:15:22.401273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.750 [2024-07-15 13:15:22.401287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.750 qpair failed and we were unable to recover it. 00:30:00.750 [2024-07-15 13:15:22.411211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.750 [2024-07-15 13:15:22.411277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.750 [2024-07-15 13:15:22.411291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.750 [2024-07-15 13:15:22.411298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.750 [2024-07-15 13:15:22.411305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.750 [2024-07-15 13:15:22.411319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.750 qpair failed and we were unable to recover it. 
00:30:00.750 [2024-07-15 13:15:22.421145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.750 [2024-07-15 13:15:22.421203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.750 [2024-07-15 13:15:22.421218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.750 [2024-07-15 13:15:22.421226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.750 [2024-07-15 13:15:22.421244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.750 [2024-07-15 13:15:22.421259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.750 qpair failed and we were unable to recover it. 00:30:00.750 [2024-07-15 13:15:22.431358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.750 [2024-07-15 13:15:22.431430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.751 [2024-07-15 13:15:22.431444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.751 [2024-07-15 13:15:22.431452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.751 [2024-07-15 13:15:22.431458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.751 [2024-07-15 13:15:22.431473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.751 qpair failed and we were unable to recover it. 00:30:00.751 [2024-07-15 13:15:22.441368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.751 [2024-07-15 13:15:22.441424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.751 [2024-07-15 13:15:22.441439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.751 [2024-07-15 13:15:22.441446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.751 [2024-07-15 13:15:22.441453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.751 [2024-07-15 13:15:22.441467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.751 qpair failed and we were unable to recover it. 
00:30:00.751 [2024-07-15 13:15:22.451382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.751 [2024-07-15 13:15:22.451446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.751 [2024-07-15 13:15:22.451461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.751 [2024-07-15 13:15:22.451468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.751 [2024-07-15 13:15:22.451474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.751 [2024-07-15 13:15:22.451489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.751 qpair failed and we were unable to recover it. 00:30:00.751 [2024-07-15 13:15:22.461348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.751 [2024-07-15 13:15:22.461404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.751 [2024-07-15 13:15:22.461418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.751 [2024-07-15 13:15:22.461426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.751 [2024-07-15 13:15:22.461433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.751 [2024-07-15 13:15:22.461447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.751 qpair failed and we were unable to recover it. 00:30:00.751 [2024-07-15 13:15:22.471384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.751 [2024-07-15 13:15:22.471463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.751 [2024-07-15 13:15:22.471477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.751 [2024-07-15 13:15:22.471484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.751 [2024-07-15 13:15:22.471492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.751 [2024-07-15 13:15:22.471506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.751 qpair failed and we were unable to recover it. 
00:30:00.751 [2024-07-15 13:15:22.481442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.751 [2024-07-15 13:15:22.481501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.751 [2024-07-15 13:15:22.481515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.751 [2024-07-15 13:15:22.481522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.751 [2024-07-15 13:15:22.481529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.751 [2024-07-15 13:15:22.481543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.751 qpair failed and we were unable to recover it. 00:30:00.751 [2024-07-15 13:15:22.491434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.751 [2024-07-15 13:15:22.491496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.751 [2024-07-15 13:15:22.491510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.751 [2024-07-15 13:15:22.491517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.751 [2024-07-15 13:15:22.491523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.751 [2024-07-15 13:15:22.491537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.751 qpair failed and we were unable to recover it. 00:30:00.751 [2024-07-15 13:15:22.501408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.751 [2024-07-15 13:15:22.501463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.751 [2024-07-15 13:15:22.501477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.751 [2024-07-15 13:15:22.501484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.751 [2024-07-15 13:15:22.501491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.751 [2024-07-15 13:15:22.501505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.751 qpair failed and we were unable to recover it. 
00:30:00.751 [2024-07-15 13:15:22.511501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.751 [2024-07-15 13:15:22.511584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.751 [2024-07-15 13:15:22.511599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.751 [2024-07-15 13:15:22.511610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.751 [2024-07-15 13:15:22.511616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.751 [2024-07-15 13:15:22.511631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.751 qpair failed and we were unable to recover it. 00:30:00.751 [2024-07-15 13:15:22.521530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.751 [2024-07-15 13:15:22.521593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.751 [2024-07-15 13:15:22.521607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.751 [2024-07-15 13:15:22.521614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.751 [2024-07-15 13:15:22.521621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.751 [2024-07-15 13:15:22.521635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.751 qpair failed and we were unable to recover it. 00:30:00.751 [2024-07-15 13:15:22.531527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.751 [2024-07-15 13:15:22.531596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.751 [2024-07-15 13:15:22.531610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.751 [2024-07-15 13:15:22.531617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.751 [2024-07-15 13:15:22.531624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.751 [2024-07-15 13:15:22.531639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.751 qpair failed and we were unable to recover it. 
00:30:00.751 [2024-07-15 13:15:22.541527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.751 [2024-07-15 13:15:22.541593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.751 [2024-07-15 13:15:22.541607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.751 [2024-07-15 13:15:22.541614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.751 [2024-07-15 13:15:22.541620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.751 [2024-07-15 13:15:22.541634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.751 qpair failed and we were unable to recover it. 00:30:00.751 [2024-07-15 13:15:22.551563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.751 [2024-07-15 13:15:22.551623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.752 [2024-07-15 13:15:22.551637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.752 [2024-07-15 13:15:22.551644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.752 [2024-07-15 13:15:22.551651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.752 [2024-07-15 13:15:22.551664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.752 qpair failed and we were unable to recover it. 00:30:00.752 [2024-07-15 13:15:22.561595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.752 [2024-07-15 13:15:22.561653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.752 [2024-07-15 13:15:22.561667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.752 [2024-07-15 13:15:22.561675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.752 [2024-07-15 13:15:22.561681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.752 [2024-07-15 13:15:22.561694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.752 qpair failed and we were unable to recover it. 
00:30:00.752 [2024-07-15 13:15:22.571691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.752 [2024-07-15 13:15:22.571773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.752 [2024-07-15 13:15:22.571788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.752 [2024-07-15 13:15:22.571795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.752 [2024-07-15 13:15:22.571802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:00.752 [2024-07-15 13:15:22.571816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.752 qpair failed and we were unable to recover it. 00:30:01.014 [2024-07-15 13:15:22.581669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.014 [2024-07-15 13:15:22.581723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.014 [2024-07-15 13:15:22.581737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.014 [2024-07-15 13:15:22.581744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.014 [2024-07-15 13:15:22.581751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.014 [2024-07-15 13:15:22.581765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.014 qpair failed and we were unable to recover it. 00:30:01.014 [2024-07-15 13:15:22.591768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.014 [2024-07-15 13:15:22.591825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.014 [2024-07-15 13:15:22.591839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.014 [2024-07-15 13:15:22.591846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.014 [2024-07-15 13:15:22.591853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.014 [2024-07-15 13:15:22.591867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.014 qpair failed and we were unable to recover it. 
00:30:01.014 [2024-07-15 13:15:22.601756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.014 [2024-07-15 13:15:22.601820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.014 [2024-07-15 13:15:22.601834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.014 [2024-07-15 13:15:22.601845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.014 [2024-07-15 13:15:22.601851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.014 [2024-07-15 13:15:22.601865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.014 qpair failed and we were unable to recover it. 00:30:01.014 [2024-07-15 13:15:22.611764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.014 [2024-07-15 13:15:22.611826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.014 [2024-07-15 13:15:22.611840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.014 [2024-07-15 13:15:22.611847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.014 [2024-07-15 13:15:22.611854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.014 [2024-07-15 13:15:22.611868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.014 qpair failed and we were unable to recover it. 00:30:01.014 [2024-07-15 13:15:22.621737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.014 [2024-07-15 13:15:22.621793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.014 [2024-07-15 13:15:22.621808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.014 [2024-07-15 13:15:22.621815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.014 [2024-07-15 13:15:22.621822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.014 [2024-07-15 13:15:22.621835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.014 qpair failed and we were unable to recover it. 
00:30:01.014 [2024-07-15 13:15:22.631760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.014 [2024-07-15 13:15:22.631826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.014 [2024-07-15 13:15:22.631841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.014 [2024-07-15 13:15:22.631848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.014 [2024-07-15 13:15:22.631854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.014 [2024-07-15 13:15:22.631868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.014 qpair failed and we were unable to recover it. 00:30:01.014 [2024-07-15 13:15:22.641848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.014 [2024-07-15 13:15:22.641908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.014 [2024-07-15 13:15:22.641922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.014 [2024-07-15 13:15:22.641929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.014 [2024-07-15 13:15:22.641936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.014 [2024-07-15 13:15:22.641950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.014 qpair failed and we were unable to recover it. 00:30:01.014 [2024-07-15 13:15:22.651838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.014 [2024-07-15 13:15:22.651905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.651929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.651938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.651945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.651964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 
00:30:01.015 [2024-07-15 13:15:22.661861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.661929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.661945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.661952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.661959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.661975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 00:30:01.015 [2024-07-15 13:15:22.671927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.672000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.672025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.672034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.672041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.672060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 00:30:01.015 [2024-07-15 13:15:22.681957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.682019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.682044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.682053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.682060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.682079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 
00:30:01.015 [2024-07-15 13:15:22.691954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.692016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.692044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.692053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.692061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.692079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 00:30:01.015 [2024-07-15 13:15:22.701962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.702068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.702084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.702092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.702098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.702114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 00:30:01.015 [2024-07-15 13:15:22.712039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.712147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.712162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.712169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.712176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.712191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 
00:30:01.015 [2024-07-15 13:15:22.721975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.722049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.722063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.722071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.722078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.722092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 00:30:01.015 [2024-07-15 13:15:22.732048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.732102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.732117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.732124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.732130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.732149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 00:30:01.015 [2024-07-15 13:15:22.742083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.742187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.742202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.742210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.742216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.742236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 
00:30:01.015 [2024-07-15 13:15:22.752149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.752209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.752223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.752236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.752242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.752257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 00:30:01.015 [2024-07-15 13:15:22.762130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.762212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.762227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.762239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.762246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.762260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 00:30:01.015 [2024-07-15 13:15:22.772158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.772213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.772227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.772240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.772246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.772261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 
00:30:01.015 [2024-07-15 13:15:22.782066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.782126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.782144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.782151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.782158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.015 [2024-07-15 13:15:22.782172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.015 qpair failed and we were unable to recover it. 00:30:01.015 [2024-07-15 13:15:22.792136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.015 [2024-07-15 13:15:22.792199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.015 [2024-07-15 13:15:22.792215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.015 [2024-07-15 13:15:22.792222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.015 [2024-07-15 13:15:22.792234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.016 [2024-07-15 13:15:22.792250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.016 qpair failed and we were unable to recover it. 00:30:01.016 [2024-07-15 13:15:22.802270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.016 [2024-07-15 13:15:22.802329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.016 [2024-07-15 13:15:22.802343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.016 [2024-07-15 13:15:22.802351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.016 [2024-07-15 13:15:22.802357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.016 [2024-07-15 13:15:22.802372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.016 qpair failed and we were unable to recover it. 
00:30:01.016 [2024-07-15 13:15:22.812263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.016 [2024-07-15 13:15:22.812324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.016 [2024-07-15 13:15:22.812339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.016 [2024-07-15 13:15:22.812347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.016 [2024-07-15 13:15:22.812353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.016 [2024-07-15 13:15:22.812368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.016 qpair failed and we were unable to recover it. 00:30:01.016 [2024-07-15 13:15:22.822334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.016 [2024-07-15 13:15:22.822434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.016 [2024-07-15 13:15:22.822449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.016 [2024-07-15 13:15:22.822456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.016 [2024-07-15 13:15:22.822465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.016 [2024-07-15 13:15:22.822480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.016 qpair failed and we were unable to recover it. 00:30:01.016 [2024-07-15 13:15:22.832355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.016 [2024-07-15 13:15:22.832420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.016 [2024-07-15 13:15:22.832434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.016 [2024-07-15 13:15:22.832442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.016 [2024-07-15 13:15:22.832448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.016 [2024-07-15 13:15:22.832463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.016 qpair failed and we were unable to recover it. 
00:30:01.278 [2024-07-15 13:15:22.842389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.278 [2024-07-15 13:15:22.842457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.278 [2024-07-15 13:15:22.842472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.278 [2024-07-15 13:15:22.842479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.278 [2024-07-15 13:15:22.842486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.278 [2024-07-15 13:15:22.842500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.278 qpair failed and we were unable to recover it. 00:30:01.278 [2024-07-15 13:15:22.852367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.278 [2024-07-15 13:15:22.852421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.278 [2024-07-15 13:15:22.852436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.278 [2024-07-15 13:15:22.852443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.278 [2024-07-15 13:15:22.852450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.278 [2024-07-15 13:15:22.852464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.278 qpair failed and we were unable to recover it. 00:30:01.278 [2024-07-15 13:15:22.862419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.278 [2024-07-15 13:15:22.862473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.278 [2024-07-15 13:15:22.862488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.278 [2024-07-15 13:15:22.862495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.278 [2024-07-15 13:15:22.862502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.278 [2024-07-15 13:15:22.862516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.278 qpair failed and we were unable to recover it. 
00:30:01.278 [2024-07-15 13:15:22.872484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.278 [2024-07-15 13:15:22.872550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.278 [2024-07-15 13:15:22.872565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.278 [2024-07-15 13:15:22.872572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.278 [2024-07-15 13:15:22.872578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.278 [2024-07-15 13:15:22.872592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.278 qpair failed and we were unable to recover it. 00:30:01.278 [2024-07-15 13:15:22.882497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.278 [2024-07-15 13:15:22.882574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.278 [2024-07-15 13:15:22.882588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.278 [2024-07-15 13:15:22.882596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.278 [2024-07-15 13:15:22.882603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.278 [2024-07-15 13:15:22.882617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.278 qpair failed and we were unable to recover it. 00:30:01.278 [2024-07-15 13:15:22.892522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.278 [2024-07-15 13:15:22.892615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.278 [2024-07-15 13:15:22.892630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.278 [2024-07-15 13:15:22.892637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.278 [2024-07-15 13:15:22.892643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.278 [2024-07-15 13:15:22.892657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.278 qpair failed and we were unable to recover it. 
00:30:01.278 [2024-07-15 13:15:22.902516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.278 [2024-07-15 13:15:22.902605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.278 [2024-07-15 13:15:22.902620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.278 [2024-07-15 13:15:22.902627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.278 [2024-07-15 13:15:22.902634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.278 [2024-07-15 13:15:22.902648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.278 qpair failed and we were unable to recover it. 00:30:01.278 [2024-07-15 13:15:22.912604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.278 [2024-07-15 13:15:22.912666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.278 [2024-07-15 13:15:22.912681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.278 [2024-07-15 13:15:22.912692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.278 [2024-07-15 13:15:22.912701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.278 [2024-07-15 13:15:22.912716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.278 qpair failed and we were unable to recover it. 00:30:01.278 [2024-07-15 13:15:22.922615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.278 [2024-07-15 13:15:22.922672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.278 [2024-07-15 13:15:22.922687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.278 [2024-07-15 13:15:22.922694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.278 [2024-07-15 13:15:22.922700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.278 [2024-07-15 13:15:22.922715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.278 qpair failed and we were unable to recover it. 
00:30:01.278 [2024-07-15 13:15:22.932602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.278 [2024-07-15 13:15:22.932658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.278 [2024-07-15 13:15:22.932673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.278 [2024-07-15 13:15:22.932681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.278 [2024-07-15 13:15:22.932687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.278 [2024-07-15 13:15:22.932701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.278 qpair failed and we were unable to recover it. 00:30:01.278 [2024-07-15 13:15:22.942626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.278 [2024-07-15 13:15:22.942683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.278 [2024-07-15 13:15:22.942697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.278 [2024-07-15 13:15:22.942704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.278 [2024-07-15 13:15:22.942711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.278 [2024-07-15 13:15:22.942725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.278 qpair failed and we were unable to recover it. 00:30:01.279 [2024-07-15 13:15:22.952626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:22.952694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:22.952708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:22.952716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:22.952722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:22.952736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 
00:30:01.279 [2024-07-15 13:15:22.962699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:22.962794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:22.962809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:22.962817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:22.962823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:22.962838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 00:30:01.279 [2024-07-15 13:15:22.972583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:22.972637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:22.972652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:22.972659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:22.972665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:22.972685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 00:30:01.279 [2024-07-15 13:15:22.982616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:22.982683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:22.982699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:22.982707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:22.982714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:22.982729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 
00:30:01.279 [2024-07-15 13:15:22.992833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:22.992922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:22.992938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:22.992945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:22.992951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:22.992965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 00:30:01.279 [2024-07-15 13:15:23.002814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:23.002874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:23.002888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:23.002899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:23.002905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:23.002920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 00:30:01.279 [2024-07-15 13:15:23.012802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:23.012857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:23.012871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:23.012878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:23.012885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:23.012899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 
00:30:01.279 [2024-07-15 13:15:23.022845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:23.022954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:23.022968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:23.022976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:23.022982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:23.022996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 00:30:01.279 [2024-07-15 13:15:23.032917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:23.032976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:23.032991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:23.032998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:23.033005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:23.033020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 00:30:01.279 [2024-07-15 13:15:23.042923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:23.043023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:23.043038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:23.043046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:23.043053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:23.043067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 
00:30:01.279 [2024-07-15 13:15:23.052804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:23.052856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:23.052870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:23.052878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:23.052885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:23.052900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 00:30:01.279 [2024-07-15 13:15:23.062955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:23.063013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:23.063027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:23.063034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:23.063041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:23.063055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 00:30:01.279 [2024-07-15 13:15:23.072903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:23.072976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:23.073000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:23.073010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:23.073017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:23.073035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 
00:30:01.279 [2024-07-15 13:15:23.083042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:23.083105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:23.083121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.279 [2024-07-15 13:15:23.083129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.279 [2024-07-15 13:15:23.083135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.279 [2024-07-15 13:15:23.083150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.279 qpair failed and we were unable to recover it. 00:30:01.279 [2024-07-15 13:15:23.093038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.279 [2024-07-15 13:15:23.093095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.279 [2024-07-15 13:15:23.093114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.280 [2024-07-15 13:15:23.093122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.280 [2024-07-15 13:15:23.093130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.280 [2024-07-15 13:15:23.093145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.280 qpair failed and we were unable to recover it. 00:30:01.548 [2024-07-15 13:15:23.103047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.548 [2024-07-15 13:15:23.103105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.548 [2024-07-15 13:15:23.103120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.548 [2024-07-15 13:15:23.103127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.548 [2024-07-15 13:15:23.103133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.548 [2024-07-15 13:15:23.103148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.548 qpair failed and we were unable to recover it. 
00:30:01.548 [2024-07-15 13:15:23.113102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.548 [2024-07-15 13:15:23.113166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.548 [2024-07-15 13:15:23.113180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.548 [2024-07-15 13:15:23.113187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.548 [2024-07-15 13:15:23.113194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.548 [2024-07-15 13:15:23.113208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.548 qpair failed and we were unable to recover it. 00:30:01.548 [2024-07-15 13:15:23.123136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.548 [2024-07-15 13:15:23.123192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.548 [2024-07-15 13:15:23.123206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.548 [2024-07-15 13:15:23.123213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.548 [2024-07-15 13:15:23.123219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.548 [2024-07-15 13:15:23.123238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.548 qpair failed and we were unable to recover it. 00:30:01.548 [2024-07-15 13:15:23.133018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.548 [2024-07-15 13:15:23.133075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.548 [2024-07-15 13:15:23.133089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.548 [2024-07-15 13:15:23.133096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.548 [2024-07-15 13:15:23.133103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.548 [2024-07-15 13:15:23.133124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.548 qpair failed and we were unable to recover it. 
00:30:01.548 [2024-07-15 13:15:23.143169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.548 [2024-07-15 13:15:23.143227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.548 [2024-07-15 13:15:23.143249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.548 [2024-07-15 13:15:23.143256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.548 [2024-07-15 13:15:23.143263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.548 [2024-07-15 13:15:23.143277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.548 qpair failed and we were unable to recover it. 00:30:01.548 [2024-07-15 13:15:23.153270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.548 [2024-07-15 13:15:23.153332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.548 [2024-07-15 13:15:23.153347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.548 [2024-07-15 13:15:23.153354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.548 [2024-07-15 13:15:23.153360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.548 [2024-07-15 13:15:23.153375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.548 qpair failed and we were unable to recover it. 00:30:01.548 [2024-07-15 13:15:23.163242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.548 [2024-07-15 13:15:23.163296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.548 [2024-07-15 13:15:23.163310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.548 [2024-07-15 13:15:23.163318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.548 [2024-07-15 13:15:23.163326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.548 [2024-07-15 13:15:23.163340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.548 qpair failed and we were unable to recover it. 
00:30:01.548 [2024-07-15 13:15:23.173262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.548 [2024-07-15 13:15:23.173316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.548 [2024-07-15 13:15:23.173330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.548 [2024-07-15 13:15:23.173338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.548 [2024-07-15 13:15:23.173344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.548 [2024-07-15 13:15:23.173358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.548 qpair failed and we were unable to recover it. 00:30:01.548 [2024-07-15 13:15:23.183243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.548 [2024-07-15 13:15:23.183302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.548 [2024-07-15 13:15:23.183320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.548 [2024-07-15 13:15:23.183328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.548 [2024-07-15 13:15:23.183334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.548 [2024-07-15 13:15:23.183349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.548 qpair failed and we were unable to recover it. 00:30:01.548 [2024-07-15 13:15:23.193334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.548 [2024-07-15 13:15:23.193400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.548 [2024-07-15 13:15:23.193415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.548 [2024-07-15 13:15:23.193422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.548 [2024-07-15 13:15:23.193428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.548 [2024-07-15 13:15:23.193443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.548 qpair failed and we were unable to recover it. 
00:30:01.548 [2024-07-15 13:15:23.203391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.549 [2024-07-15 13:15:23.203450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.549 [2024-07-15 13:15:23.203465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.549 [2024-07-15 13:15:23.203474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.549 [2024-07-15 13:15:23.203480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.549 [2024-07-15 13:15:23.203494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.549 qpair failed and we were unable to recover it. 00:30:01.549 [2024-07-15 13:15:23.213373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.549 [2024-07-15 13:15:23.213481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.549 [2024-07-15 13:15:23.213496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.549 [2024-07-15 13:15:23.213503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.549 [2024-07-15 13:15:23.213509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.549 [2024-07-15 13:15:23.213523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.549 qpair failed and we were unable to recover it. 00:30:01.549 [2024-07-15 13:15:23.223383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.549 [2024-07-15 13:15:23.223440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.549 [2024-07-15 13:15:23.223454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.549 [2024-07-15 13:15:23.223461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.549 [2024-07-15 13:15:23.223470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.549 [2024-07-15 13:15:23.223485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.549 qpair failed and we were unable to recover it. 
00:30:01.549 [2024-07-15 13:15:23.233458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.549 [2024-07-15 13:15:23.233521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.549 [2024-07-15 13:15:23.233535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.549 [2024-07-15 13:15:23.233542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.549 [2024-07-15 13:15:23.233548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.549 [2024-07-15 13:15:23.233563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.549 qpair failed and we were unable to recover it. 00:30:01.549 [2024-07-15 13:15:23.243491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.549 [2024-07-15 13:15:23.243549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.549 [2024-07-15 13:15:23.243563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.549 [2024-07-15 13:15:23.243570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.549 [2024-07-15 13:15:23.243577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.549 [2024-07-15 13:15:23.243591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.549 qpair failed and we were unable to recover it. 00:30:01.549 [2024-07-15 13:15:23.253465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.549 [2024-07-15 13:15:23.253534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.549 [2024-07-15 13:15:23.253548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.549 [2024-07-15 13:15:23.253555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.549 [2024-07-15 13:15:23.253562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.549 [2024-07-15 13:15:23.253577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.549 qpair failed and we were unable to recover it. 
00:30:01.549 [2024-07-15 13:15:23.263512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.549 [2024-07-15 13:15:23.263567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.549 [2024-07-15 13:15:23.263581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.549 [2024-07-15 13:15:23.263588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.549 [2024-07-15 13:15:23.263595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.549 [2024-07-15 13:15:23.263609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.549 qpair failed and we were unable to recover it. 00:30:01.549 [2024-07-15 13:15:23.273564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.549 [2024-07-15 13:15:23.273660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.549 [2024-07-15 13:15:23.273675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.549 [2024-07-15 13:15:23.273682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.549 [2024-07-15 13:15:23.273689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.549 [2024-07-15 13:15:23.273703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.549 qpair failed and we were unable to recover it. 00:30:01.549 [2024-07-15 13:15:23.283592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.549 [2024-07-15 13:15:23.283650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.549 [2024-07-15 13:15:23.283664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.549 [2024-07-15 13:15:23.283672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.549 [2024-07-15 13:15:23.283678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.549 [2024-07-15 13:15:23.283692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.549 qpair failed and we were unable to recover it. 
00:30:01.549 [2024-07-15 13:15:23.293571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.549 [2024-07-15 13:15:23.293628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.549 [2024-07-15 13:15:23.293643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.549 [2024-07-15 13:15:23.293650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.549 [2024-07-15 13:15:23.293656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.549 [2024-07-15 13:15:23.293670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.549 qpair failed and we were unable to recover it. 00:30:01.549 [2024-07-15 13:15:23.303615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.549 [2024-07-15 13:15:23.303674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.549 [2024-07-15 13:15:23.303688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.549 [2024-07-15 13:15:23.303695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.550 [2024-07-15 13:15:23.303701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.550 [2024-07-15 13:15:23.303715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.550 qpair failed and we were unable to recover it. 00:30:01.550 [2024-07-15 13:15:23.313700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.550 [2024-07-15 13:15:23.313763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.550 [2024-07-15 13:15:23.313777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.550 [2024-07-15 13:15:23.313784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.550 [2024-07-15 13:15:23.313794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.550 [2024-07-15 13:15:23.313808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.550 qpair failed and we were unable to recover it. 
00:30:01.550 [2024-07-15 13:15:23.323685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.550 [2024-07-15 13:15:23.323787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.550 [2024-07-15 13:15:23.323802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.550 [2024-07-15 13:15:23.323809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.550 [2024-07-15 13:15:23.323816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.550 [2024-07-15 13:15:23.323830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.550 qpair failed and we were unable to recover it. 00:30:01.550 [2024-07-15 13:15:23.333571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.550 [2024-07-15 13:15:23.333644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.550 [2024-07-15 13:15:23.333658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.550 [2024-07-15 13:15:23.333665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.550 [2024-07-15 13:15:23.333671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.550 [2024-07-15 13:15:23.333685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.550 qpair failed and we were unable to recover it. 00:30:01.550 [2024-07-15 13:15:23.343787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.550 [2024-07-15 13:15:23.343844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.550 [2024-07-15 13:15:23.343858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.550 [2024-07-15 13:15:23.343866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.550 [2024-07-15 13:15:23.343872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.550 [2024-07-15 13:15:23.343886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.550 qpair failed and we were unable to recover it. 
00:30:01.550 [2024-07-15 13:15:23.353784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.550 [2024-07-15 13:15:23.353848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.550 [2024-07-15 13:15:23.353863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.550 [2024-07-15 13:15:23.353871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.550 [2024-07-15 13:15:23.353877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.550 [2024-07-15 13:15:23.353891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.550 qpair failed and we were unable to recover it. 00:30:01.550 [2024-07-15 13:15:23.363784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.550 [2024-07-15 13:15:23.363851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.550 [2024-07-15 13:15:23.363866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.550 [2024-07-15 13:15:23.363873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.550 [2024-07-15 13:15:23.363879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.550 [2024-07-15 13:15:23.363894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.550 qpair failed and we were unable to recover it. 00:30:01.813 [2024-07-15 13:15:23.373676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.373734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.373748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.373755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.373761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.373776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 
00:30:01.814 [2024-07-15 13:15:23.383821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.383882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.383896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.383903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.383909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.383924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.393882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.393949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.393973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.393982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.393990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.394009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.403985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.404043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.404060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.404072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.404079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.404094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 
00:30:01.814 [2024-07-15 13:15:23.413905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.413965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.413989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.413997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.414005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.414024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.423925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.423988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.424012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.424021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.424029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.424047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.434053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.434130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.434154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.434163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.434170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.434189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 
00:30:01.814 [2024-07-15 13:15:23.444014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.444075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.444092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.444100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.444106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.444122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.454047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.454104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.454119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.454126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.454133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.454147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.464044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.464100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.464115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.464122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.464128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.464143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 
00:30:01.814 [2024-07-15 13:15:23.474165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.474277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.474293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.474300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.474307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.474321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.484131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.484241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.484256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.484264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.484270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.484285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.494189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.494247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.494265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.494272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.494278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.494292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 
00:30:01.814 [2024-07-15 13:15:23.504139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.504202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.504217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.504224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.504235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.504250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.514221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.514297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.514311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.514318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.514325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.514339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.524235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.524292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.524306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.524313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.524320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.524334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 
00:30:01.814 [2024-07-15 13:15:23.534213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.534271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.534286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.534293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.534299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.534318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.544145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.544200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.544214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.544222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.544228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.544246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.554365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.554445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.554459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.554467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.554474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.554488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 
00:30:01.814 [2024-07-15 13:15:23.564326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.564395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.564408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.564416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.564422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.564437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.574214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.574267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.574282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.574289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.574295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.814 [2024-07-15 13:15:23.574316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.814 qpair failed and we were unable to recover it. 00:30:01.814 [2024-07-15 13:15:23.584359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.814 [2024-07-15 13:15:23.584438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.814 [2024-07-15 13:15:23.584456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.814 [2024-07-15 13:15:23.584464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.814 [2024-07-15 13:15:23.584471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.815 [2024-07-15 13:15:23.584485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.815 qpair failed and we were unable to recover it. 
00:30:01.815 [2024-07-15 13:15:23.594426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.815 [2024-07-15 13:15:23.594509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.815 [2024-07-15 13:15:23.594523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.815 [2024-07-15 13:15:23.594530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.815 [2024-07-15 13:15:23.594537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.815 [2024-07-15 13:15:23.594551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.815 qpair failed and we were unable to recover it. 00:30:01.815 [2024-07-15 13:15:23.604454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.815 [2024-07-15 13:15:23.604513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.815 [2024-07-15 13:15:23.604526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.815 [2024-07-15 13:15:23.604534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.815 [2024-07-15 13:15:23.604540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.815 [2024-07-15 13:15:23.604554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.815 qpair failed and we were unable to recover it. 00:30:01.815 [2024-07-15 13:15:23.614432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.815 [2024-07-15 13:15:23.614551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.815 [2024-07-15 13:15:23.614566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.815 [2024-07-15 13:15:23.614573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.815 [2024-07-15 13:15:23.614579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.815 [2024-07-15 13:15:23.614593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.815 qpair failed and we were unable to recover it. 
00:30:01.815 [2024-07-15 13:15:23.624485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.815 [2024-07-15 13:15:23.624542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.815 [2024-07-15 13:15:23.624556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.815 [2024-07-15 13:15:23.624564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.815 [2024-07-15 13:15:23.624574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.815 [2024-07-15 13:15:23.624588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.815 qpair failed and we were unable to recover it. 00:30:01.815 [2024-07-15 13:15:23.634431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.815 [2024-07-15 13:15:23.634498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.815 [2024-07-15 13:15:23.634514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.815 [2024-07-15 13:15:23.634523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.815 [2024-07-15 13:15:23.634530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:01.815 [2024-07-15 13:15:23.634545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:01.815 qpair failed and we were unable to recover it. 00:30:02.076 [2024-07-15 13:15:23.644560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.076 [2024-07-15 13:15:23.644620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.076 [2024-07-15 13:15:23.644635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.076 [2024-07-15 13:15:23.644642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.076 [2024-07-15 13:15:23.644649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.076 [2024-07-15 13:15:23.644663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.076 qpair failed and we were unable to recover it. 
00:30:02.076 [2024-07-15 13:15:23.654554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.076 [2024-07-15 13:15:23.654633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.076 [2024-07-15 13:15:23.654647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.076 [2024-07-15 13:15:23.654656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.076 [2024-07-15 13:15:23.654662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.076 [2024-07-15 13:15:23.654676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.076 qpair failed and we were unable to recover it. 00:30:02.076 [2024-07-15 13:15:23.664578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.076 [2024-07-15 13:15:23.664641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.076 [2024-07-15 13:15:23.664655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.076 [2024-07-15 13:15:23.664663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.076 [2024-07-15 13:15:23.664669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.076 [2024-07-15 13:15:23.664683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.076 qpair failed and we were unable to recover it. 00:30:02.076 [2024-07-15 13:15:23.674643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.076 [2024-07-15 13:15:23.674748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.076 [2024-07-15 13:15:23.674763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.076 [2024-07-15 13:15:23.674770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.076 [2024-07-15 13:15:23.674777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.076 [2024-07-15 13:15:23.674791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.076 qpair failed and we were unable to recover it. 
00:30:02.076 [2024-07-15 13:15:23.684621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.076 [2024-07-15 13:15:23.684675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.076 [2024-07-15 13:15:23.684689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.076 [2024-07-15 13:15:23.684696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.076 [2024-07-15 13:15:23.684703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.076 [2024-07-15 13:15:23.684717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.076 qpair failed and we were unable to recover it. 00:30:02.076 [2024-07-15 13:15:23.694662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.076 [2024-07-15 13:15:23.694721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.076 [2024-07-15 13:15:23.694735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.076 [2024-07-15 13:15:23.694742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.076 [2024-07-15 13:15:23.694749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.076 [2024-07-15 13:15:23.694763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.076 qpair failed and we were unable to recover it. 00:30:02.076 [2024-07-15 13:15:23.704680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.076 [2024-07-15 13:15:23.704740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.076 [2024-07-15 13:15:23.704754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.076 [2024-07-15 13:15:23.704762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.076 [2024-07-15 13:15:23.704768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.076 [2024-07-15 13:15:23.704782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.076 qpair failed and we were unable to recover it. 
00:30:02.076 [2024-07-15 13:15:23.714746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.076 [2024-07-15 13:15:23.714808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.076 [2024-07-15 13:15:23.714822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.076 [2024-07-15 13:15:23.714829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.076 [2024-07-15 13:15:23.714839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.076 [2024-07-15 13:15:23.714853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.076 qpair failed and we were unable to recover it. 00:30:02.076 [2024-07-15 13:15:23.724725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.076 [2024-07-15 13:15:23.724807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.076 [2024-07-15 13:15:23.724821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.076 [2024-07-15 13:15:23.724829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.076 [2024-07-15 13:15:23.724835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.076 [2024-07-15 13:15:23.724849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.076 qpair failed and we were unable to recover it. 00:30:02.076 [2024-07-15 13:15:23.734803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.734888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.734902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.734910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.734917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.734931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 
00:30:02.077 [2024-07-15 13:15:23.744776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.744836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.744850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.744858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.744864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.744878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 00:30:02.077 [2024-07-15 13:15:23.754866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.754972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.754987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.754995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.755001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.755015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 00:30:02.077 [2024-07-15 13:15:23.764832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.764937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.764952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.764959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.764966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.764980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 
00:30:02.077 [2024-07-15 13:15:23.774854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.774919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.774943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.774952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.774960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.774978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 00:30:02.077 [2024-07-15 13:15:23.784907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.784991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.785006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.785015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.785022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.785037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 00:30:02.077 [2024-07-15 13:15:23.794926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.794998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.795022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.795031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.795039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.795058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 
00:30:02.077 [2024-07-15 13:15:23.804958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.805021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.805037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.805049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.805056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.805072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 00:30:02.077 [2024-07-15 13:15:23.814977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.815031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.815046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.815054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.815060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.815074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 00:30:02.077 [2024-07-15 13:15:23.825095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.825152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.825166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.825173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.825180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.825194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 
00:30:02.077 [2024-07-15 13:15:23.835069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.835155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.835171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.835181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.835188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.835203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 00:30:02.077 [2024-07-15 13:15:23.845133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.845253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.845269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.845276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.845282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.845297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 00:30:02.077 [2024-07-15 13:15:23.855098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.855159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.855174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.855181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.855188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.855202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 
00:30:02.077 [2024-07-15 13:15:23.865115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.865183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.865198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.865205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.077 [2024-07-15 13:15:23.865212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.077 [2024-07-15 13:15:23.865227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.077 qpair failed and we were unable to recover it. 00:30:02.077 [2024-07-15 13:15:23.875171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.077 [2024-07-15 13:15:23.875241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.077 [2024-07-15 13:15:23.875256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.077 [2024-07-15 13:15:23.875263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.078 [2024-07-15 13:15:23.875270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.078 [2024-07-15 13:15:23.875284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.078 qpair failed and we were unable to recover it. 00:30:02.078 [2024-07-15 13:15:23.885173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.078 [2024-07-15 13:15:23.885271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.078 [2024-07-15 13:15:23.885287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.078 [2024-07-15 13:15:23.885294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.078 [2024-07-15 13:15:23.885301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.078 [2024-07-15 13:15:23.885316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.078 qpair failed and we were unable to recover it. 
00:30:02.078 [2024-07-15 13:15:23.895145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.078 [2024-07-15 13:15:23.895199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.078 [2024-07-15 13:15:23.895220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.078 [2024-07-15 13:15:23.895235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.078 [2024-07-15 13:15:23.895242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.078 [2024-07-15 13:15:23.895257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.078 qpair failed and we were unable to recover it. 00:30:02.339 [2024-07-15 13:15:23.905233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.339 [2024-07-15 13:15:23.905301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.339 [2024-07-15 13:15:23.905320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.339 [2024-07-15 13:15:23.905327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.339 [2024-07-15 13:15:23.905334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.339 [2024-07-15 13:15:23.905349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.339 qpair failed and we were unable to recover it. 00:30:02.339 [2024-07-15 13:15:23.915299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.339 [2024-07-15 13:15:23.915362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.339 [2024-07-15 13:15:23.915377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.339 [2024-07-15 13:15:23.915385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.339 [2024-07-15 13:15:23.915391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.339 [2024-07-15 13:15:23.915406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.339 qpair failed and we were unable to recover it. 
00:30:02.339 [2024-07-15 13:15:23.925277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.339 [2024-07-15 13:15:23.925333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.339 [2024-07-15 13:15:23.925348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.339 [2024-07-15 13:15:23.925355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.339 [2024-07-15 13:15:23.925362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.339 [2024-07-15 13:15:23.925377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.339 qpair failed and we were unable to recover it. 00:30:02.339 [2024-07-15 13:15:23.935303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.339 [2024-07-15 13:15:23.935370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.339 [2024-07-15 13:15:23.935384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.339 [2024-07-15 13:15:23.935392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.339 [2024-07-15 13:15:23.935398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.339 [2024-07-15 13:15:23.935416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.339 qpair failed and we were unable to recover it. 00:30:02.339 [2024-07-15 13:15:23.945343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.339 [2024-07-15 13:15:23.945439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.339 [2024-07-15 13:15:23.945454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.339 [2024-07-15 13:15:23.945461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.339 [2024-07-15 13:15:23.945468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.339 [2024-07-15 13:15:23.945482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.339 qpair failed and we were unable to recover it. 
00:30:02.339 [2024-07-15 13:15:23.955401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.339 [2024-07-15 13:15:23.955463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.339 [2024-07-15 13:15:23.955477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.339 [2024-07-15 13:15:23.955485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.339 [2024-07-15 13:15:23.955491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.339 [2024-07-15 13:15:23.955505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.339 qpair failed and we were unable to recover it. 00:30:02.339 [2024-07-15 13:15:23.965397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.339 [2024-07-15 13:15:23.965453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.339 [2024-07-15 13:15:23.965467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.339 [2024-07-15 13:15:23.965475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.339 [2024-07-15 13:15:23.965481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.339 [2024-07-15 13:15:23.965495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.339 qpair failed and we were unable to recover it. 00:30:02.339 [2024-07-15 13:15:23.975417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.339 [2024-07-15 13:15:23.975474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.339 [2024-07-15 13:15:23.975488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.339 [2024-07-15 13:15:23.975495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.339 [2024-07-15 13:15:23.975502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.339 [2024-07-15 13:15:23.975516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.339 qpair failed and we were unable to recover it. 
00:30:02.339 [2024-07-15 13:15:23.985322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.339 [2024-07-15 13:15:23.985377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.339 [2024-07-15 13:15:23.985395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.339 [2024-07-15 13:15:23.985403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.339 [2024-07-15 13:15:23.985409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.339 [2024-07-15 13:15:23.985424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.339 qpair failed and we were unable to recover it. 00:30:02.339 [2024-07-15 13:15:23.995506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.339 [2024-07-15 13:15:23.995568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.339 [2024-07-15 13:15:23.995583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.339 [2024-07-15 13:15:23.995590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.339 [2024-07-15 13:15:23.995597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.339 [2024-07-15 13:15:23.995611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.339 qpair failed and we were unable to recover it. 00:30:02.339 [2024-07-15 13:15:24.005486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.339 [2024-07-15 13:15:24.005542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.339 [2024-07-15 13:15:24.005556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.339 [2024-07-15 13:15:24.005564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.339 [2024-07-15 13:15:24.005570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.339 [2024-07-15 13:15:24.005584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.339 qpair failed and we were unable to recover it. 
00:30:02.340 [2024-07-15 13:15:24.015518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.015578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.015592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.015600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.015607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.015621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 00:30:02.340 [2024-07-15 13:15:24.025426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.025484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.025498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.025505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.025511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.025529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 00:30:02.340 [2024-07-15 13:15:24.035598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.035666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.035681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.035689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.035698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.035713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 
00:30:02.340 [2024-07-15 13:15:24.045585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.045636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.045651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.045658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.045665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.045679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 00:30:02.340 [2024-07-15 13:15:24.055616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.055669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.055683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.055690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.055697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.055711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 00:30:02.340 [2024-07-15 13:15:24.065652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.065725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.065742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.065749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.065756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.065771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 
00:30:02.340 [2024-07-15 13:15:24.075714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.075796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.075811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.075818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.075827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.075841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 00:30:02.340 [2024-07-15 13:15:24.085700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.085753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.085767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.085775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.085781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.085795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 00:30:02.340 [2024-07-15 13:15:24.095610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.095671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.095686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.095694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.095700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.095715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 
00:30:02.340 [2024-07-15 13:15:24.105746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.105802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.105817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.105824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.105831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.105845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 00:30:02.340 [2024-07-15 13:15:24.115816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.115877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.115892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.115899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.115909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.115924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 00:30:02.340 [2024-07-15 13:15:24.125816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.125867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.125882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.125889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.125895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.125910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 
00:30:02.340 [2024-07-15 13:15:24.135859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.135914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.135929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.135936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.135943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.135957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 00:30:02.340 [2024-07-15 13:15:24.145876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.340 [2024-07-15 13:15:24.145979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.340 [2024-07-15 13:15:24.145994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.340 [2024-07-15 13:15:24.146001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.340 [2024-07-15 13:15:24.146008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.340 [2024-07-15 13:15:24.146022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.340 qpair failed and we were unable to recover it. 00:30:02.340 [2024-07-15 13:15:24.155938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.341 [2024-07-15 13:15:24.156005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.341 [2024-07-15 13:15:24.156019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.341 [2024-07-15 13:15:24.156026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.341 [2024-07-15 13:15:24.156033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.341 [2024-07-15 13:15:24.156047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.341 qpair failed and we were unable to recover it. 
00:30:02.603 [2024-07-15 13:15:24.165923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.165979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.165994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.166001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.166007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.166021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 00:30:02.603 [2024-07-15 13:15:24.175951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.176006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.176020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.176027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.176033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.176047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 00:30:02.603 [2024-07-15 13:15:24.185975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.186030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.186044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.186052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.186059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.186074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 
00:30:02.603 [2024-07-15 13:15:24.196049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.196108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.196123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.196130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.196137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.196151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 00:30:02.603 [2024-07-15 13:15:24.206000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.206060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.206074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.206085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.206091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.206105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 00:30:02.603 [2024-07-15 13:15:24.215979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.216034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.216048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.216056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.216063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.216082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 
00:30:02.603 [2024-07-15 13:15:24.226099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.226158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.226172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.226179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.226186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.226200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 00:30:02.603 [2024-07-15 13:15:24.236155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.236217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.236237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.236245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.236252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.236266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 00:30:02.603 [2024-07-15 13:15:24.246148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.246202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.246217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.246224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.246236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.246251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 
00:30:02.603 [2024-07-15 13:15:24.256183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.256282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.256297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.256304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.256310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.256325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 00:30:02.603 [2024-07-15 13:15:24.266200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.266261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.266275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.266283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.266289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.266303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 00:30:02.603 [2024-07-15 13:15:24.276280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.276343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.276358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.276365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.276371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.276386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 
00:30:02.603 [2024-07-15 13:15:24.286258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.286364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.286379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.286386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.286392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.286408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 00:30:02.603 [2024-07-15 13:15:24.296273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.603 [2024-07-15 13:15:24.296330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.603 [2024-07-15 13:15:24.296347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.603 [2024-07-15 13:15:24.296354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.603 [2024-07-15 13:15:24.296361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.603 [2024-07-15 13:15:24.296376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.603 qpair failed and we were unable to recover it. 00:30:02.603 [2024-07-15 13:15:24.306268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.604 [2024-07-15 13:15:24.306320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.604 [2024-07-15 13:15:24.306335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.604 [2024-07-15 13:15:24.306342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.604 [2024-07-15 13:15:24.306348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.604 [2024-07-15 13:15:24.306362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.604 qpair failed and we were unable to recover it. 
00:30:02.604 [2024-07-15 13:15:24.316263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.604 [2024-07-15 13:15:24.316331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.604 [2024-07-15 13:15:24.316346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.604 [2024-07-15 13:15:24.316354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.604 [2024-07-15 13:15:24.316361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.604 [2024-07-15 13:15:24.316375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.604 qpair failed and we were unable to recover it. 00:30:02.604 [2024-07-15 13:15:24.326377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.604 [2024-07-15 13:15:24.326428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.604 [2024-07-15 13:15:24.326443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.604 [2024-07-15 13:15:24.326449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.604 [2024-07-15 13:15:24.326456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.604 [2024-07-15 13:15:24.326470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.604 qpair failed and we were unable to recover it. 00:30:02.604 [2024-07-15 13:15:24.336377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.604 [2024-07-15 13:15:24.336436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.604 [2024-07-15 13:15:24.336451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.604 [2024-07-15 13:15:24.336458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.604 [2024-07-15 13:15:24.336464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.604 [2024-07-15 13:15:24.336478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.604 qpair failed and we were unable to recover it. 
00:30:02.604 [2024-07-15 13:15:24.346337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.604 [2024-07-15 13:15:24.346393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.604 [2024-07-15 13:15:24.346408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.604 [2024-07-15 13:15:24.346415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.604 [2024-07-15 13:15:24.346421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.604 [2024-07-15 13:15:24.346436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.604 qpair failed and we were unable to recover it. 00:30:02.604 [2024-07-15 13:15:24.356473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.604 [2024-07-15 13:15:24.356548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.604 [2024-07-15 13:15:24.356562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.604 [2024-07-15 13:15:24.356570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.604 [2024-07-15 13:15:24.356578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.604 [2024-07-15 13:15:24.356593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.604 qpair failed and we were unable to recover it. 00:30:02.604 [2024-07-15 13:15:24.366466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.604 [2024-07-15 13:15:24.366523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.604 [2024-07-15 13:15:24.366537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.604 [2024-07-15 13:15:24.366544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.604 [2024-07-15 13:15:24.366550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.604 [2024-07-15 13:15:24.366564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.604 qpair failed and we were unable to recover it. 
00:30:02.604 [2024-07-15 13:15:24.376551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.604 [2024-07-15 13:15:24.376627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.604 [2024-07-15 13:15:24.376642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.604 [2024-07-15 13:15:24.376649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.604 [2024-07-15 13:15:24.376656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.604 [2024-07-15 13:15:24.376670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.604 qpair failed and we were unable to recover it. 00:30:02.604 [2024-07-15 13:15:24.386535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.604 [2024-07-15 13:15:24.386592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.604 [2024-07-15 13:15:24.386609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.604 [2024-07-15 13:15:24.386616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.604 [2024-07-15 13:15:24.386622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.604 [2024-07-15 13:15:24.386636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.604 qpair failed and we were unable to recover it. 00:30:02.604 [2024-07-15 13:15:24.396599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.604 [2024-07-15 13:15:24.396663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.604 [2024-07-15 13:15:24.396677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.604 [2024-07-15 13:15:24.396684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.604 [2024-07-15 13:15:24.396690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.604 [2024-07-15 13:15:24.396704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.604 qpair failed and we were unable to recover it. 
00:30:02.604 [2024-07-15 13:15:24.406582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.604 [2024-07-15 13:15:24.406646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.604 [2024-07-15 13:15:24.406660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.604 [2024-07-15 13:15:24.406667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.604 [2024-07-15 13:15:24.406674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.604 [2024-07-15 13:15:24.406688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.604 qpair failed and we were unable to recover it. 00:30:02.604 [2024-07-15 13:15:24.416486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.604 [2024-07-15 13:15:24.416542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.604 [2024-07-15 13:15:24.416556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.604 [2024-07-15 13:15:24.416563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.604 [2024-07-15 13:15:24.416569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.604 [2024-07-15 13:15:24.416583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.604 qpair failed and we were unable to recover it. 00:30:02.917 [2024-07-15 13:15:24.426683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.917 [2024-07-15 13:15:24.426740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.917 [2024-07-15 13:15:24.426755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.917 [2024-07-15 13:15:24.426763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.917 [2024-07-15 13:15:24.426769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.917 [2024-07-15 13:15:24.426790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.917 qpair failed and we were unable to recover it. 
00:30:02.917 [2024-07-15 13:15:24.436709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.917 [2024-07-15 13:15:24.436772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.917 [2024-07-15 13:15:24.436787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.917 [2024-07-15 13:15:24.436794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.917 [2024-07-15 13:15:24.436801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.917 [2024-07-15 13:15:24.436815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.917 qpair failed and we were unable to recover it. 00:30:02.918 [2024-07-15 13:15:24.446688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.446742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.446756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.446764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.446770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.446785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 00:30:02.918 [2024-07-15 13:15:24.456721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.456779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.456793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.456800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.456807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.456821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 
00:30:02.918 [2024-07-15 13:15:24.466751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.466806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.466820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.466827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.466834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.466848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 00:30:02.918 [2024-07-15 13:15:24.476693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.476761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.476778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.476785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.476791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.476806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 00:30:02.918 [2024-07-15 13:15:24.486785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.486847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.486861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.486868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.486875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.486889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 
00:30:02.918 [2024-07-15 13:15:24.496893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.496950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.496964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.496971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.496977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.496991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 00:30:02.918 [2024-07-15 13:15:24.506847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.506916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.506940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.506949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.506957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.506976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 00:30:02.918 [2024-07-15 13:15:24.516869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.516956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.516981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.516989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.517001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.517020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 
00:30:02.918 [2024-07-15 13:15:24.526899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.526958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.526974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.526981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.526988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.527004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 00:30:02.918 [2024-07-15 13:15:24.536921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.536986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.537001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.537008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.537015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.537031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 00:30:02.918 [2024-07-15 13:15:24.546951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.547006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.547021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.547028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.547035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.547049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 
00:30:02.918 [2024-07-15 13:15:24.557029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.557093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.557107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.557114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.557121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.557135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 00:30:02.918 [2024-07-15 13:15:24.567057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.567138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.567153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.567161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.567168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.567182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 00:30:02.918 [2024-07-15 13:15:24.576918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.576976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.576992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.576999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.577006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.577026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 
00:30:02.918 [2024-07-15 13:15:24.587065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.587131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.587146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.587153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.587159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.587174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 00:30:02.918 [2024-07-15 13:15:24.597161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.918 [2024-07-15 13:15:24.597239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.918 [2024-07-15 13:15:24.597253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.918 [2024-07-15 13:15:24.597260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.918 [2024-07-15 13:15:24.597267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.918 [2024-07-15 13:15:24.597282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.918 qpair failed and we were unable to recover it. 00:30:02.919 [2024-07-15 13:15:24.607112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.919 [2024-07-15 13:15:24.607166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.919 [2024-07-15 13:15:24.607180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.919 [2024-07-15 13:15:24.607191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.919 [2024-07-15 13:15:24.607198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.919 [2024-07-15 13:15:24.607212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.919 qpair failed and we were unable to recover it. 
00:30:02.919 [2024-07-15 13:15:24.617140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.919 [2024-07-15 13:15:24.617192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.919 [2024-07-15 13:15:24.617207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.919 [2024-07-15 13:15:24.617214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.919 [2024-07-15 13:15:24.617221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.919 [2024-07-15 13:15:24.617239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.919 qpair failed and we were unable to recover it. 00:30:02.919 [2024-07-15 13:15:24.627167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.919 [2024-07-15 13:15:24.627226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.919 [2024-07-15 13:15:24.627243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.919 [2024-07-15 13:15:24.627251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.919 [2024-07-15 13:15:24.627258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.919 [2024-07-15 13:15:24.627273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.919 qpair failed and we were unable to recover it. 00:30:02.919 [2024-07-15 13:15:24.637242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.919 [2024-07-15 13:15:24.637308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.919 [2024-07-15 13:15:24.637322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.919 [2024-07-15 13:15:24.637330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.919 [2024-07-15 13:15:24.637336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.919 [2024-07-15 13:15:24.637350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.919 qpair failed and we were unable to recover it. 
00:30:02.919 [2024-07-15 13:15:24.647237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.919 [2024-07-15 13:15:24.647290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.919 [2024-07-15 13:15:24.647304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.919 [2024-07-15 13:15:24.647312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.919 [2024-07-15 13:15:24.647318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.919 [2024-07-15 13:15:24.647333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.919 qpair failed and we were unable to recover it. 00:30:02.919 [2024-07-15 13:15:24.657247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.919 [2024-07-15 13:15:24.657305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.919 [2024-07-15 13:15:24.657319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.919 [2024-07-15 13:15:24.657326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.919 [2024-07-15 13:15:24.657333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.919 [2024-07-15 13:15:24.657348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.919 qpair failed and we were unable to recover it. 00:30:02.919 [2024-07-15 13:15:24.667331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.919 [2024-07-15 13:15:24.667421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.919 [2024-07-15 13:15:24.667435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.919 [2024-07-15 13:15:24.667442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.919 [2024-07-15 13:15:24.667449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.919 [2024-07-15 13:15:24.667463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.919 qpair failed and we were unable to recover it. 
00:30:02.919 [2024-07-15 13:15:24.677358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.919 [2024-07-15 13:15:24.677421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.919 [2024-07-15 13:15:24.677435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.919 [2024-07-15 13:15:24.677442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.919 [2024-07-15 13:15:24.677449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.919 [2024-07-15 13:15:24.677463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.919 qpair failed and we were unable to recover it. 00:30:02.919 [2024-07-15 13:15:24.687320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.919 [2024-07-15 13:15:24.687378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.919 [2024-07-15 13:15:24.687393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.919 [2024-07-15 13:15:24.687400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.919 [2024-07-15 13:15:24.687406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:02.919 [2024-07-15 13:15:24.687421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.919 qpair failed and we were unable to recover it. 00:30:03.180 [2024-07-15 13:15:24.697358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.697466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.697480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.697491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.697498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.697513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 
00:30:03.180 [2024-07-15 13:15:24.707372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.707427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.707441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.707448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.707455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.707469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 00:30:03.180 [2024-07-15 13:15:24.717450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.717518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.717532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.717539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.717545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.717559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 00:30:03.180 [2024-07-15 13:15:24.727363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.727419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.727433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.727441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.727447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.727461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 
00:30:03.180 [2024-07-15 13:15:24.737472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.737529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.737543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.737550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.737556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.737570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 00:30:03.180 [2024-07-15 13:15:24.747484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.747547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.747562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.747569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.747575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.747590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 00:30:03.180 [2024-07-15 13:15:24.757540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.757612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.757626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.757633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.757639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.757654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 
00:30:03.180 [2024-07-15 13:15:24.767533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.767589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.767604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.767611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.767617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.767631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 00:30:03.180 [2024-07-15 13:15:24.777617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.777671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.777685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.777692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.777698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.777712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 00:30:03.180 [2024-07-15 13:15:24.787484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.787538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.787556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.787563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.787569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.787584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 
00:30:03.180 [2024-07-15 13:15:24.797565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.797630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.797644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.797652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.797658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.797672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 00:30:03.180 [2024-07-15 13:15:24.807684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.807742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.807756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.807763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.807769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.807783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 00:30:03.180 [2024-07-15 13:15:24.817691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.817752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.817766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.817773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.817779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.817793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 
00:30:03.180 [2024-07-15 13:15:24.827726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.827817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.827830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.827838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.827845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.827862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 00:30:03.180 [2024-07-15 13:15:24.837735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.837792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.837806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.837814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.837820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.837834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 00:30:03.180 [2024-07-15 13:15:24.847779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.847868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.847882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.847890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.847896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.847911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 
00:30:03.180 [2024-07-15 13:15:24.857670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.857723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.857737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.857744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.857751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.857765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 00:30:03.180 [2024-07-15 13:15:24.867892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.867960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.867974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.867982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.867988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.868002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 00:30:03.180 [2024-07-15 13:15:24.877758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.877820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.877837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.180 [2024-07-15 13:15:24.877844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.180 [2024-07-15 13:15:24.877851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.180 [2024-07-15 13:15:24.877865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.180 qpair failed and we were unable to recover it. 
00:30:03.180 [2024-07-15 13:15:24.887898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.180 [2024-07-15 13:15:24.888000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.180 [2024-07-15 13:15:24.888016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.181 [2024-07-15 13:15:24.888023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.181 [2024-07-15 13:15:24.888029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.181 [2024-07-15 13:15:24.888044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.181 qpair failed and we were unable to recover it. 00:30:03.181 [2024-07-15 13:15:24.897845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.181 [2024-07-15 13:15:24.897919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.181 [2024-07-15 13:15:24.897942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.181 [2024-07-15 13:15:24.897951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.181 [2024-07-15 13:15:24.897958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.181 [2024-07-15 13:15:24.897976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.181 qpair failed and we were unable to recover it. 00:30:03.181 [2024-07-15 13:15:24.907968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.181 [2024-07-15 13:15:24.908022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.181 [2024-07-15 13:15:24.908038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.181 [2024-07-15 13:15:24.908046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.181 [2024-07-15 13:15:24.908052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.181 [2024-07-15 13:15:24.908067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.181 qpair failed and we were unable to recover it. 
00:30:03.181 [2024-07-15 13:15:24.917983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.181 [2024-07-15 13:15:24.918038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.181 [2024-07-15 13:15:24.918052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.181 [2024-07-15 13:15:24.918060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.181 [2024-07-15 13:15:24.918070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.181 [2024-07-15 13:15:24.918085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.181 qpair failed and we were unable to recover it. 00:30:03.181 [2024-07-15 13:15:24.927905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.181 [2024-07-15 13:15:24.928012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.181 [2024-07-15 13:15:24.928026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.181 [2024-07-15 13:15:24.928034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.181 [2024-07-15 13:15:24.928040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.181 [2024-07-15 13:15:24.928055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.181 qpair failed and we were unable to recover it. 00:30:03.181 [2024-07-15 13:15:24.938025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.181 [2024-07-15 13:15:24.938130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.181 [2024-07-15 13:15:24.938145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.181 [2024-07-15 13:15:24.938153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.181 [2024-07-15 13:15:24.938159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.181 [2024-07-15 13:15:24.938173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.181 qpair failed and we were unable to recover it. 
00:30:03.181 [2024-07-15 13:15:24.948053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.181 [2024-07-15 13:15:24.948108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.181 [2024-07-15 13:15:24.948122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.181 [2024-07-15 13:15:24.948130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.181 [2024-07-15 13:15:24.948136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.181 [2024-07-15 13:15:24.948150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.181 qpair failed and we were unable to recover it. 00:30:03.181 [2024-07-15 13:15:24.957953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.181 [2024-07-15 13:15:24.958017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.181 [2024-07-15 13:15:24.958031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.181 [2024-07-15 13:15:24.958039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.181 [2024-07-15 13:15:24.958045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.181 [2024-07-15 13:15:24.958059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.181 qpair failed and we were unable to recover it. 00:30:03.181 [2024-07-15 13:15:24.968107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.181 [2024-07-15 13:15:24.968166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.181 [2024-07-15 13:15:24.968180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.181 [2024-07-15 13:15:24.968188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.181 [2024-07-15 13:15:24.968194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.181 [2024-07-15 13:15:24.968208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.181 qpair failed and we were unable to recover it. 
00:30:03.181 [2024-07-15 13:15:24.978134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.181 [2024-07-15 13:15:24.978192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.181 [2024-07-15 13:15:24.978206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.181 [2024-07-15 13:15:24.978213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.181 [2024-07-15 13:15:24.978220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.181 [2024-07-15 13:15:24.978239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.181 qpair failed and we were unable to recover it. 00:30:03.181 [2024-07-15 13:15:24.988067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.181 [2024-07-15 13:15:24.988124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.181 [2024-07-15 13:15:24.988138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.181 [2024-07-15 13:15:24.988146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.181 [2024-07-15 13:15:24.988152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.181 [2024-07-15 13:15:24.988167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.181 qpair failed and we were unable to recover it. 00:30:03.181 [2024-07-15 13:15:24.998173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.181 [2024-07-15 13:15:24.998236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.181 [2024-07-15 13:15:24.998250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.181 [2024-07-15 13:15:24.998258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.181 [2024-07-15 13:15:24.998264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.181 [2024-07-15 13:15:24.998279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.181 qpair failed and we were unable to recover it. 
00:30:03.442 [2024-07-15 13:15:25.008212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.442 [2024-07-15 13:15:25.008269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.442 [2024-07-15 13:15:25.008284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.442 [2024-07-15 13:15:25.008292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.442 [2024-07-15 13:15:25.008302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.442 [2024-07-15 13:15:25.008317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.442 qpair failed and we were unable to recover it. 00:30:03.442 [2024-07-15 13:15:25.018190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.442 [2024-07-15 13:15:25.018258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.442 [2024-07-15 13:15:25.018273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.442 [2024-07-15 13:15:25.018280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.442 [2024-07-15 13:15:25.018287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.442 [2024-07-15 13:15:25.018302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.442 qpair failed and we were unable to recover it. 00:30:03.442 [2024-07-15 13:15:25.028261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.442 [2024-07-15 13:15:25.028314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.442 [2024-07-15 13:15:25.028328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.442 [2024-07-15 13:15:25.028335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.442 [2024-07-15 13:15:25.028342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.442 [2024-07-15 13:15:25.028356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.442 qpair failed and we were unable to recover it. 
00:30:03.442 [2024-07-15 13:15:25.038175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.442 [2024-07-15 13:15:25.038238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.442 [2024-07-15 13:15:25.038253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.442 [2024-07-15 13:15:25.038260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.442 [2024-07-15 13:15:25.038266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.442 [2024-07-15 13:15:25.038287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.442 qpair failed and we were unable to recover it. 00:30:03.442 [2024-07-15 13:15:25.048383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.442 [2024-07-15 13:15:25.048442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.442 [2024-07-15 13:15:25.048457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.442 [2024-07-15 13:15:25.048464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.442 [2024-07-15 13:15:25.048470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.442 [2024-07-15 13:15:25.048485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.442 qpair failed and we were unable to recover it. 00:30:03.442 [2024-07-15 13:15:25.058364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.442 [2024-07-15 13:15:25.058416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.442 [2024-07-15 13:15:25.058431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.442 [2024-07-15 13:15:25.058439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.442 [2024-07-15 13:15:25.058445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.442 [2024-07-15 13:15:25.058460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.442 qpair failed and we were unable to recover it. 
00:30:03.442 [2024-07-15 13:15:25.068412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.442 [2024-07-15 13:15:25.068465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.442 [2024-07-15 13:15:25.068479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.442 [2024-07-15 13:15:25.068486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.442 [2024-07-15 13:15:25.068492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.442 [2024-07-15 13:15:25.068506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.442 qpair failed and we were unable to recover it. 00:30:03.442 [2024-07-15 13:15:25.078410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.442 [2024-07-15 13:15:25.078515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.442 [2024-07-15 13:15:25.078530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.442 [2024-07-15 13:15:25.078537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.442 [2024-07-15 13:15:25.078544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.442 [2024-07-15 13:15:25.078558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.442 qpair failed and we were unable to recover it. 00:30:03.442 [2024-07-15 13:15:25.088429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.442 [2024-07-15 13:15:25.088482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.442 [2024-07-15 13:15:25.088496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.442 [2024-07-15 13:15:25.088503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.442 [2024-07-15 13:15:25.088510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.442 [2024-07-15 13:15:25.088524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.442 qpair failed and we were unable to recover it. 
00:30:03.442 [2024-07-15 13:15:25.098534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.442 [2024-07-15 13:15:25.098588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.442 [2024-07-15 13:15:25.098602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.442 [2024-07-15 13:15:25.098613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.442 [2024-07-15 13:15:25.098619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.442 [2024-07-15 13:15:25.098633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.442 qpair failed and we were unable to recover it. 00:30:03.442 [2024-07-15 13:15:25.108500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.442 [2024-07-15 13:15:25.108553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.442 [2024-07-15 13:15:25.108567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.108574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.108580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.108594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 00:30:03.443 [2024-07-15 13:15:25.118521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.118579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.118593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.118601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.118607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.118621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 
00:30:03.443 [2024-07-15 13:15:25.128403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.128461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.128475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.128482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.128488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.128502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 00:30:03.443 [2024-07-15 13:15:25.138551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.138605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.138619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.138626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.138633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.138647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 00:30:03.443 [2024-07-15 13:15:25.148586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.148642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.148656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.148663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.148669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.148683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 
00:30:03.443 [2024-07-15 13:15:25.158614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.158673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.158687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.158694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.158700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.158715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 00:30:03.443 [2024-07-15 13:15:25.168626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.168682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.168695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.168703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.168709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.168723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 00:30:03.443 [2024-07-15 13:15:25.178657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.178725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.178745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.178753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.178760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.178777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 
00:30:03.443 [2024-07-15 13:15:25.188570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.188625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.188643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.188650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.188656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.188670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 00:30:03.443 [2024-07-15 13:15:25.198696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.198757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.198771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.198778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.198784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.198798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 00:30:03.443 [2024-07-15 13:15:25.208709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.208763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.208777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.208785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.208792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.208806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 
00:30:03.443 [2024-07-15 13:15:25.218767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.218819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.218833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.218841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.218847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.218861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 00:30:03.443 [2024-07-15 13:15:25.228765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.228818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.228833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.228840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.228846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.228864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 00:30:03.443 [2024-07-15 13:15:25.238822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.238878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.238892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.238899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.238905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.238919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 
00:30:03.443 [2024-07-15 13:15:25.248847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.248898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.248913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.248920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.443 [2024-07-15 13:15:25.248926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.443 [2024-07-15 13:15:25.248941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.443 qpair failed and we were unable to recover it. 00:30:03.443 [2024-07-15 13:15:25.258862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.443 [2024-07-15 13:15:25.258923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.443 [2024-07-15 13:15:25.258937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.443 [2024-07-15 13:15:25.258945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.444 [2024-07-15 13:15:25.258951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.444 [2024-07-15 13:15:25.258965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.444 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-15 13:15:25.268884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.704 [2024-07-15 13:15:25.268940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.704 [2024-07-15 13:15:25.268954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.704 [2024-07-15 13:15:25.268961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.704 [2024-07-15 13:15:25.268968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.704 [2024-07-15 13:15:25.268982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.704 qpair failed and we were unable to recover it. 
00:30:03.704 [2024-07-15 13:15:25.278939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.704 [2024-07-15 13:15:25.279045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.704 [2024-07-15 13:15:25.279064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.704 [2024-07-15 13:15:25.279071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.704 [2024-07-15 13:15:25.279077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.704 [2024-07-15 13:15:25.279091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-15 13:15:25.288950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.704 [2024-07-15 13:15:25.289005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.704 [2024-07-15 13:15:25.289019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.704 [2024-07-15 13:15:25.289026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.704 [2024-07-15 13:15:25.289032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.704 [2024-07-15 13:15:25.289046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.704 qpair failed and we were unable to recover it. 00:30:03.704 [2024-07-15 13:15:25.298987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.704 [2024-07-15 13:15:25.299067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.704 [2024-07-15 13:15:25.299081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.704 [2024-07-15 13:15:25.299088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.704 [2024-07-15 13:15:25.299094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.704 [2024-07-15 13:15:25.299108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.704 qpair failed and we were unable to recover it. 
00:30:03.704 [2024-07-15 13:15:25.309013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.705 [2024-07-15 13:15:25.309069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.705 [2024-07-15 13:15:25.309083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.705 [2024-07-15 13:15:25.309090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.705 [2024-07-15 13:15:25.309097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.705 [2024-07-15 13:15:25.309111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-15 13:15:25.318906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.705 [2024-07-15 13:15:25.318965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.705 [2024-07-15 13:15:25.318979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.705 [2024-07-15 13:15:25.318987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.705 [2024-07-15 13:15:25.318996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.705 [2024-07-15 13:15:25.319010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-15 13:15:25.328978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.705 [2024-07-15 13:15:25.329033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.705 [2024-07-15 13:15:25.329048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.705 [2024-07-15 13:15:25.329055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.705 [2024-07-15 13:15:25.329062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.705 [2024-07-15 13:15:25.329075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.705 qpair failed and we were unable to recover it. 
00:30:03.705 [2024-07-15 13:15:25.338957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.705 [2024-07-15 13:15:25.339015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.705 [2024-07-15 13:15:25.339029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.705 [2024-07-15 13:15:25.339038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.705 [2024-07-15 13:15:25.339044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.705 [2024-07-15 13:15:25.339058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-15 13:15:25.349089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.705 [2024-07-15 13:15:25.349143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.705 [2024-07-15 13:15:25.349158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.705 [2024-07-15 13:15:25.349165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.705 [2024-07-15 13:15:25.349171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.705 [2024-07-15 13:15:25.349185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-15 13:15:25.359159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.705 [2024-07-15 13:15:25.359269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.705 [2024-07-15 13:15:25.359284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.705 [2024-07-15 13:15:25.359291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.705 [2024-07-15 13:15:25.359297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.705 [2024-07-15 13:15:25.359311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.705 qpair failed and we were unable to recover it. 
00:30:03.705 [2024-07-15 13:15:25.369161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.705 [2024-07-15 13:15:25.369216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.705 [2024-07-15 13:15:25.369234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.705 [2024-07-15 13:15:25.369242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.705 [2024-07-15 13:15:25.369248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.705 [2024-07-15 13:15:25.369262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-15 13:15:25.379151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.705 [2024-07-15 13:15:25.379203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.705 [2024-07-15 13:15:25.379217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.705 [2024-07-15 13:15:25.379224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.705 [2024-07-15 13:15:25.379234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe150000b90 00:30:03.705 [2024-07-15 13:15:25.379248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.705 qpair failed and we were unable to recover it. 
00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Write completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Write completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Write completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Write completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Write completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Write completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Write completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Write completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Write completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Write completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Write completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Write completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 Read completed with error (sct=0, sc=8) 00:30:03.705 starting I/O failed 00:30:03.705 [2024-07-15 13:15:25.379598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.705 [2024-07-15 13:15:25.389219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.705 [2024-07-15 13:15:25.389278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.705 [2024-07-15 13:15:25.389296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.705 [2024-07-15 13:15:25.389302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF 
Fabric CONNECT command 00:30:03.705 [2024-07-15 13:15:25.389307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe158000b90 00:30:03.705 [2024-07-15 13:15:25.389320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-15 13:15:25.399200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.705 [2024-07-15 13:15:25.399257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.705 [2024-07-15 13:15:25.399269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.705 [2024-07-15 13:15:25.399274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.705 [2024-07-15 13:15:25.399279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe158000b90 00:30:03.705 [2024-07-15 13:15:25.399290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:03.705 qpair failed and we were unable to recover it. 00:30:03.705 [2024-07-15 13:15:25.399482] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:03.705 A controller has encountered a failure and is being reset. 00:30:03.705 [2024-07-15 13:15:25.399598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c70800 (9): Bad file descriptor 00:30:03.965 Controller properly reset. 
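The entries above record repeated NVMe-oF Fabrics CONNECT attempts to the TCP listener at 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode1) being rejected while the target reports "Unknown controller ID 0x1", ending in a keep-alive failure and a controller reset. As a rough, illustrative sketch only (not part of this log, and not the test harness's own method), a comparable discovery and CONNECT attempt against the same listener could be issued from a Linux host; the availability of nvme-cli and the nvme-tcp initiator module on that host is an assumption, not something shown in this run:

    # Assumption: nvme-cli and the kernel nvme-tcp initiator are installed on the host (not shown in this log)
    sudo modprobe nvme-tcp
    # List the subsystems exported by the target listener that appears in the log above
    sudo nvme discover -t tcp -a 10.0.0.2 -s 4420
    # Issue the same kind of Fabrics CONNECT the SPDK initiator is retrying in these entries
    sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

A successful connect here would indicate the listener is reachable and accepting new admin/controller associations; in the scenario logged above the failures are expected, since the test is deliberately disconnecting the target while I/O queue pairs still reference the old controller ID.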
00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Write completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Write completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Write completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Write completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Write completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Write completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Write completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Write completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Write completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Write completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Write completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 Read completed with error (sct=0, sc=8) 00:30:03.966 starting I/O failed 00:30:03.966 [2024-07-15 13:15:25.590266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.966 Initializing NVMe Controllers 00:30:03.966 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:03.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:03.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:03.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 
00:30:03.966 Initialization complete. Launching workers. 00:30:03.966 Starting thread on core 1 00:30:03.966 Starting thread on core 2 00:30:03.966 Starting thread on core 3 00:30:03.966 Starting thread on core 0 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:03.966 00:30:03.966 real 0m11.468s 00:30:03.966 user 0m21.427s 00:30:03.966 sys 0m4.090s 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:03.966 ************************************ 00:30:03.966 END TEST nvmf_target_disconnect_tc2 00:30:03.966 ************************************ 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:03.966 rmmod nvme_tcp 00:30:03.966 rmmod nvme_fabrics 00:30:03.966 rmmod nvme_keyring 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 889216 ']' 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 889216 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 889216 ']' 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 889216 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:03.966 13:15:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 889216 00:30:04.226 13:15:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:30:04.226 13:15:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:30:04.226 13:15:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 889216' 00:30:04.226 killing process with pid 889216 00:30:04.226 13:15:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 889216 00:30:04.226 13:15:25 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@972 -- # wait 889216 00:30:04.226 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:04.226 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:04.226 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:04.226 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:04.226 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:04.226 13:15:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.226 13:15:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:04.226 13:15:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.766 13:15:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:06.766 00:30:06.766 real 0m22.519s 00:30:06.766 user 0m49.560s 00:30:06.766 sys 0m10.753s 00:30:06.767 13:15:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:06.767 13:15:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:06.767 ************************************ 00:30:06.767 END TEST nvmf_target_disconnect 00:30:06.767 ************************************ 00:30:06.767 13:15:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:06.767 13:15:28 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:30:06.767 13:15:28 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:06.767 13:15:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.767 13:15:28 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:30:06.767 00:30:06.767 real 23m23.541s 00:30:06.767 user 47m20.142s 00:30:06.767 sys 7m38.183s 00:30:06.767 13:15:28 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:06.767 13:15:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.767 ************************************ 00:30:06.767 END TEST nvmf_tcp 00:30:06.767 ************************************ 00:30:06.767 13:15:28 -- common/autotest_common.sh@1142 -- # return 0 00:30:06.767 13:15:28 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:30:06.767 13:15:28 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:06.767 13:15:28 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:06.767 13:15:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:06.767 13:15:28 -- common/autotest_common.sh@10 -- # set +x 00:30:06.767 ************************************ 00:30:06.767 START TEST spdkcli_nvmf_tcp 00:30:06.767 ************************************ 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:06.767 * Looking for test storage... 
00:30:06.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=891084 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 891084 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 891084 ']' 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:06.767 13:15:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.767 [2024-07-15 13:15:28.364394] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:30:06.767 [2024-07-15 13:15:28.364443] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891084 ] 00:30:06.767 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.767 [2024-07-15 13:15:28.432107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:06.767 [2024-07-15 13:15:28.497643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.767 [2024-07-15 13:15:28.497645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.338 13:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:07.338 13:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:30:07.338 13:15:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:07.338 13:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:07.338 13:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.338 13:15:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:07.338 13:15:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:07.338 13:15:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:07.338 13:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:07.338 13:15:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.338 13:15:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:07.338 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:07.338 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:07.338 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:07.338 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:07.338 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:07.338 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:07.338 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:07.338 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:07.338 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:07.338 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:07.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:07.338 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:07.338 ' 00:30:09.887 [2024-07-15 13:15:31.475924] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.828 [2024-07-15 13:15:32.639698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:13.371 [2024-07-15 13:15:34.777858] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:15.283 [2024-07-15 13:15:36.615302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:16.224 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:16.224 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:16.224 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:16.224 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:16.224 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:16.224 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:16.224 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:16.224 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:16.224 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:16.224 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:16.224 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:16.224 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:16.224 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:16.484 13:15:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:16.484 13:15:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:16.484 13:15:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:16.484 13:15:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:16.484 13:15:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:16.484 13:15:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:16.484 13:15:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:16.484 13:15:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:16.745 13:15:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:16.745 13:15:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:16.745 13:15:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:16.745 13:15:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:16.745 13:15:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:17.005 13:15:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:17.005 13:15:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:17.005 13:15:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:17.005 13:15:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:17.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:17.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:17.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:17.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:17.005 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:17.005 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:17.005 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:17.005 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:17.005 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:17.005 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:17.005 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:17.005 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:17.005 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:17.005 ' 00:30:22.294 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:22.294 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:22.294 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:22.294 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:22.294 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:22.294 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:22.294 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:22.294 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:22.294 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:22.294 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:22.294 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:30:22.294 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:22.294 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:22.294 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 891084 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 891084 ']' 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 891084 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 891084 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 891084' 00:30:22.294 killing process with pid 891084 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 891084 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 891084 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 891084 ']' 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 891084 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 891084 ']' 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 891084 00:30:22.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (891084) - No such process 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 891084 is not found' 00:30:22.294 Process with pid 891084 is not found 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:22.294 00:30:22.294 real 0m15.524s 00:30:22.294 user 0m31.963s 00:30:22.294 sys 0m0.684s 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:22.294 13:15:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:22.294 ************************************ 00:30:22.294 END TEST spdkcli_nvmf_tcp 00:30:22.294 ************************************ 00:30:22.294 13:15:43 -- common/autotest_common.sh@1142 -- # return 0 00:30:22.294 13:15:43 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:22.294 13:15:43 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:22.294 13:15:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:22.294 13:15:43 -- common/autotest_common.sh@10 -- # set +x 00:30:22.294 ************************************ 00:30:22.294 START TEST nvmf_identify_passthru 00:30:22.294 ************************************ 00:30:22.294 13:15:43 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:22.294 * Looking for test storage... 00:30:22.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:22.294 13:15:43 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:22.294 13:15:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.294 13:15:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.294 13:15:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.294 13:15:43 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.294 13:15:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.294 13:15:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.294 13:15:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:22.294 13:15:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:22.294 13:15:43 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:22.294 13:15:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.294 13:15:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.294 13:15:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.294 13:15:43 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.294 13:15:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.294 13:15:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.294 13:15:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:22.294 13:15:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.294 13:15:43 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.294 13:15:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:22.294 13:15:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:22.294 13:15:43 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:22.294 13:15:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:30.443 13:15:51 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.443 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:30.443 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:30.444 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:30.444 Found net devices under 0000:31:00.0: cvl_0_0 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:30.444 Found net devices under 0000:31:00.1: cvl_0_1 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
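(The nvmf_tcp_init phase traced below isolates the first e810 port in a private network namespace so the SPDK target (10.0.0.2 on cvl_0_0) and the host-side initiator (10.0.0.1 on cvl_0_1) can exchange NVMe/TCP traffic on a single machine. A minimal sketch of the equivalent manual setup, run as root and assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing this run uses:

# flush any stale addresses, then move the target-side port into its own namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# the initiator keeps 10.0.0.1 on cvl_0_1; the target gets 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the default NVMe/TCP port and check reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With this in place the target application is launched inside the namespace, i.e. prefixed with "ip netns exec cvl_0_0_ns_spdk", which is why the identify_passthru test later starts nvmf_tgt that way.)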
00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:30.444 13:15:51 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:30.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:30.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:30:30.444 00:30:30.444 --- 10.0.0.2 ping statistics --- 00:30:30.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.444 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:30.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:30.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:30:30.444 00:30:30.444 --- 10.0.0.1 ping statistics --- 00:30:30.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:30.444 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:30.444 13:15:52 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:30.444 13:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:30.444 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:30.444 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:30.444 13:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:30.444 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:30.444 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:30.444 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:30.444 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:30.444 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:30.444 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:30.444 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:30.444 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:30.444 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:30.705 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:30.705 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:30.705 13:15:52 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:30.705 13:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:30.705 13:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:30.705 13:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:30.705 13:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:30.705 13:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:30.705 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.275 
13:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:30:31.275 13:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:31.275 13:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:31.275 13:15:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:31.275 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.543 13:15:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:31.543 13:15:53 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:31.543 13:15:53 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:31.543 13:15:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:31.543 13:15:53 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:31.543 13:15:53 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:31.543 13:15:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:31.543 13:15:53 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=898530 00:30:31.543 13:15:53 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:31.543 13:15:53 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:31.543 13:15:53 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 898530 00:30:31.543 13:15:53 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 898530 ']' 00:30:31.543 13:15:53 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.543 13:15:53 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:31.543 13:15:53 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.543 13:15:53 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:31.543 13:15:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:31.806 [2024-07-15 13:15:53.387653] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:30:31.806 [2024-07-15 13:15:53.387714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.806 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.806 [2024-07-15 13:15:53.465209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:31.806 [2024-07-15 13:15:53.537149] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.806 [2024-07-15 13:15:53.537190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:31.806 [2024-07-15 13:15:53.537198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.806 [2024-07-15 13:15:53.537205] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.806 [2024-07-15 13:15:53.537210] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:31.806 [2024-07-15 13:15:53.537352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.806 [2024-07-15 13:15:53.537528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:31.806 [2024-07-15 13:15:53.537684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:31.806 [2024-07-15 13:15:53.537685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.376 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:32.376 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:32.376 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:32.376 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.376 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.376 INFO: Log level set to 20 00:30:32.376 INFO: Requests: 00:30:32.376 { 00:30:32.376 "jsonrpc": "2.0", 00:30:32.376 "method": "nvmf_set_config", 00:30:32.376 "id": 1, 00:30:32.376 "params": { 00:30:32.376 "admin_cmd_passthru": { 00:30:32.376 "identify_ctrlr": true 00:30:32.376 } 00:30:32.376 } 00:30:32.376 } 00:30:32.376 00:30:32.376 INFO: response: 00:30:32.376 { 00:30:32.376 "jsonrpc": "2.0", 00:30:32.376 "id": 1, 00:30:32.376 "result": true 00:30:32.376 } 00:30:32.376 00:30:32.376 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.376 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:32.376 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.376 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.376 INFO: Setting log level to 20 00:30:32.376 INFO: Setting log level to 20 00:30:32.376 INFO: Log level set to 20 00:30:32.376 INFO: Log level set to 20 00:30:32.376 INFO: Requests: 00:30:32.376 { 00:30:32.376 "jsonrpc": "2.0", 00:30:32.376 "method": "framework_start_init", 00:30:32.376 "id": 1 00:30:32.376 } 00:30:32.376 00:30:32.376 INFO: Requests: 00:30:32.376 { 00:30:32.376 "jsonrpc": "2.0", 00:30:32.376 "method": "framework_start_init", 00:30:32.376 "id": 1 00:30:32.376 } 00:30:32.376 00:30:32.636 [2024-07-15 13:15:54.233660] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:32.636 INFO: response: 00:30:32.636 { 00:30:32.636 "jsonrpc": "2.0", 00:30:32.636 "id": 1, 00:30:32.636 "result": true 00:30:32.636 } 00:30:32.636 00:30:32.636 INFO: response: 00:30:32.636 { 00:30:32.636 "jsonrpc": "2.0", 00:30:32.636 "id": 1, 00:30:32.636 "result": true 00:30:32.636 } 00:30:32.636 00:30:32.636 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.636 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:32.636 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.636 13:15:54 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:32.636 INFO: Setting log level to 40 00:30:32.636 INFO: Setting log level to 40 00:30:32.636 INFO: Setting log level to 40 00:30:32.636 [2024-07-15 13:15:54.246988] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.636 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.636 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:32.636 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:32.636 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.636 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:32.636 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.636 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.896 Nvme0n1 00:30:32.896 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.896 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:32.896 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.896 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.896 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.896 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:32.896 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.896 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.896 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.896 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.896 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.896 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.896 [2024-07-15 13:15:54.637578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.896 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.896 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:32.896 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.896 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.896 [ 00:30:32.896 { 00:30:32.896 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:32.896 "subtype": "Discovery", 00:30:32.896 "listen_addresses": [], 00:30:32.896 "allow_any_host": true, 00:30:32.896 "hosts": [] 00:30:32.896 }, 00:30:32.896 { 00:30:32.896 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:32.896 "subtype": "NVMe", 00:30:32.896 "listen_addresses": [ 00:30:32.896 { 00:30:32.896 "trtype": "TCP", 00:30:32.896 "adrfam": "IPv4", 00:30:32.896 "traddr": "10.0.0.2", 00:30:32.896 "trsvcid": "4420" 00:30:32.896 } 00:30:32.896 ], 00:30:32.896 "allow_any_host": true, 00:30:32.896 "hosts": [], 00:30:32.896 "serial_number": 
"SPDK00000000000001", 00:30:32.896 "model_number": "SPDK bdev Controller", 00:30:32.897 "max_namespaces": 1, 00:30:32.897 "min_cntlid": 1, 00:30:32.897 "max_cntlid": 65519, 00:30:32.897 "namespaces": [ 00:30:32.897 { 00:30:32.897 "nsid": 1, 00:30:32.897 "bdev_name": "Nvme0n1", 00:30:32.897 "name": "Nvme0n1", 00:30:32.897 "nguid": "3634473052605494002538450000002B", 00:30:32.897 "uuid": "36344730-5260-5494-0025-38450000002b" 00:30:32.897 } 00:30:32.897 ] 00:30:32.897 } 00:30:32.897 ] 00:30:32.897 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.897 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:32.897 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:32.897 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:32.897 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.157 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:30:33.157 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:33.157 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:33.157 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:33.157 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.157 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:33.157 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:30:33.157 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:33.157 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:33.157 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.157 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:33.157 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.157 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:33.157 13:15:54 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:33.157 13:15:54 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:33.157 13:15:54 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:33.157 13:15:54 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:33.157 13:15:54 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:33.157 13:15:54 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:33.157 13:15:54 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:33.157 rmmod nvme_tcp 00:30:33.157 rmmod nvme_fabrics 00:30:33.157 rmmod nvme_keyring 00:30:33.157 13:15:54 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:33.418 13:15:54 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:33.418 13:15:54 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:33.418 13:15:54 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 898530 ']' 00:30:33.418 13:15:54 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 898530 00:30:33.418 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 898530 ']' 00:30:33.418 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 898530 00:30:33.418 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:33.418 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:33.418 13:15:54 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 898530 00:30:33.418 13:15:55 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:33.418 13:15:55 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:33.418 13:15:55 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 898530' 00:30:33.418 killing process with pid 898530 00:30:33.418 13:15:55 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 898530 00:30:33.418 13:15:55 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 898530 00:30:33.708 13:15:55 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:33.708 13:15:55 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:33.708 13:15:55 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:33.708 13:15:55 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:33.708 13:15:55 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:33.708 13:15:55 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.708 13:15:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:33.708 13:15:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.626 13:15:57 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:35.626 00:30:35.626 real 0m13.606s 00:30:35.626 user 0m9.765s 00:30:35.626 sys 0m6.841s 00:30:35.626 13:15:57 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:35.627 13:15:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.627 ************************************ 00:30:35.627 END TEST nvmf_identify_passthru 00:30:35.627 ************************************ 00:30:35.627 13:15:57 -- common/autotest_common.sh@1142 -- # return 0 00:30:35.627 13:15:57 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:35.627 13:15:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:35.627 13:15:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:35.627 13:15:57 -- common/autotest_common.sh@10 -- # set +x 00:30:35.888 ************************************ 00:30:35.888 START TEST nvmf_dif 00:30:35.888 ************************************ 00:30:35.888 13:15:57 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:35.888 * Looking for test storage... 
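The identify_passthru run that just ended configures the target purely over JSON-RPC; each call shows up above as an rpc_cmd trace plus its request/response pair. A hedged scripts/rpc.py equivalent of that sequence, talking to the /var/tmp/spdk.sock socket the target announced ($SPDK is a placeholder for the checkout path):

  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_set_config --passthru-identify-ctrlr      # issued before init, as the trace does under --wait-for-rpc
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                            # should list cnode1 carrying namespace Nvme0n1

The test then identifies cnode1 over TCP and passes when the serial and model match the values read over PCIe, before tearing the subsystem down with nvmf_delete_subsystem.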
00:30:35.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:35.888 13:15:57 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:35.888 13:15:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.889 13:15:57 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.889 13:15:57 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.889 13:15:57 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.889 13:15:57 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.889 13:15:57 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.889 13:15:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.889 13:15:57 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:35.889 13:15:57 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:35.889 13:15:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:35.889 13:15:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:35.889 13:15:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:35.889 13:15:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:35.889 13:15:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.889 13:15:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:35.889 13:15:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:35.889 13:15:57 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:35.889 13:15:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:44.034 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:44.034 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
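The scan above walks the harness's PCI device-ID tables and finds both ports of an Intel E810 NIC at 0000:31:00.0 and 0000:31:00.1 (vendor 0x8086, device 0x159b). A hedged way to reproduce that match outside the harness is to query lspci for the same IDs; the loop that continues just below then maps each function to its kernel netdev through /sys/bus/pci/devices/<bdf>/net/, which is where cvl_0_0 and cvl_0_1 come from:

  lspci -D -d 8086:159b        # -D keeps the full domain:bus:dev.func form, matching 0000:31:00.x
  lspci -D -d 8086:1592        # the other E810 device ID in the harness's e810 list
  ls /sys/bus/pci/devices/0000:31:00.0/net/           # expected to show cvl_0_0 on this machine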
00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:44.034 Found net devices under 0000:31:00.0: cvl_0_0 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:44.034 Found net devices under 0000:31:00.1: cvl_0_1 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:44.034 13:16:05 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:44.035 13:16:05 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.035 13:16:05 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.035 13:16:05 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.035 13:16:05 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.035 13:16:05 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:44.035 13:16:05 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.035 13:16:05 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.035 13:16:05 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.035 13:16:05 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:44.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:30:44.035 00:30:44.035 --- 10.0.0.2 ping statistics --- 00:30:44.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.035 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:30:44.035 13:16:05 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:44.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:30:44.035 00:30:44.035 --- 10.0.0.1 ping statistics --- 00:30:44.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.035 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:30:44.035 13:16:05 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.035 13:16:05 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:44.035 13:16:05 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:44.035 13:16:05 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:48.244 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:48.244 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:48.244 13:16:09 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.244 13:16:09 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:48.244 13:16:09 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:48.244 13:16:09 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.244 13:16:09 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:48.244 13:16:09 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:48.244 13:16:09 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:48.244 13:16:09 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:48.244 13:16:09 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:48.244 13:16:09 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:48.244 13:16:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:48.244 13:16:09 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=905182 00:30:48.244 13:16:09 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 905182 00:30:48.244 13:16:09 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:48.244 13:16:09 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 905182 ']' 00:30:48.244 13:16:09 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.244 13:16:09 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:48.244 13:16:09 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.244 13:16:09 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:48.244 13:16:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:48.244 [2024-07-15 13:16:09.806239] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:30:48.245 [2024-07-15 13:16:09.806307] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.245 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.245 [2024-07-15 13:16:09.887264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.245 [2024-07-15 13:16:09.960031] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.245 [2024-07-15 13:16:09.960072] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.245 [2024-07-15 13:16:09.960080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.245 [2024-07-15 13:16:09.960087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.245 [2024-07-15 13:16:09.960093] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
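The nvmf_dif init traced above wires the two E810 ports back to back through a network namespace, checks reachability in both directions, runs setup.sh (the vfio-pci lines), loads nvme-tcp, and only then starts nvmf_tgt inside the namespace. A condensed, hedged recap of that wiring with the interface names and addresses from this run (the full helper also flushes any existing addresses first):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace
  modprobe nvme-tcp                                    # kernel initiator used by the later connect tests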
00:30:48.245 [2024-07-15 13:16:09.960114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.813 13:16:10 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:48.813 13:16:10 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:48.813 13:16:10 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:48.813 13:16:10 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:48.813 13:16:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:48.813 13:16:10 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.813 13:16:10 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:48.813 13:16:10 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:48.813 13:16:10 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.813 13:16:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:49.074 [2024-07-15 13:16:10.639695] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.074 13:16:10 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.074 13:16:10 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:49.074 13:16:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:49.074 13:16:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:49.074 13:16:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:49.074 ************************************ 00:30:49.074 START TEST fio_dif_1_default 00:30:49.074 ************************************ 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:49.074 bdev_null0 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:49.074 [2024-07-15 13:16:10.724030] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:49.074 { 00:30:49.074 "params": { 00:30:49.074 "name": "Nvme$subsystem", 00:30:49.074 "trtype": "$TEST_TRANSPORT", 00:30:49.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:49.074 "adrfam": "ipv4", 00:30:49.074 "trsvcid": "$NVMF_PORT", 00:30:49.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:49.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:49.074 "hdgst": ${hdgst:-false}, 00:30:49.074 "ddgst": ${ddgst:-false} 00:30:49.074 }, 00:30:49.074 "method": "bdev_nvme_attach_controller" 00:30:49.074 } 00:30:49.074 EOF 00:30:49.074 )") 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:49.074 "params": { 00:30:49.074 "name": "Nvme0", 00:30:49.074 "trtype": "tcp", 00:30:49.074 "traddr": "10.0.0.2", 00:30:49.074 "adrfam": "ipv4", 00:30:49.074 "trsvcid": "4420", 00:30:49.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:49.074 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:49.074 "hdgst": false, 00:30:49.074 "ddgst": false 00:30:49.074 }, 00:30:49.074 "method": "bdev_nvme_attach_controller" 00:30:49.074 }' 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:49.074 13:16:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.332 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:49.332 fio-3.35 00:30:49.332 Starting 1 thread 00:30:49.592 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.819 00:31:01.819 filename0: (groupid=0, jobs=1): err= 0: pid=905715: Mon Jul 15 13:16:21 2024 00:31:01.819 read: IOPS=96, BW=384KiB/s (393kB/s)(3856KiB/10041msec) 00:31:01.819 slat (nsec): min=5406, max=31691, avg=6368.10, stdev=2280.34 00:31:01.819 clat (usec): min=919, max=42642, avg=41643.78, stdev=3694.08 00:31:01.819 lat (usec): min=943, max=42666, avg=41650.15, stdev=3692.77 00:31:01.819 clat percentiles (usec): 00:31:01.819 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:01.819 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:01.819 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:01.819 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:31:01.819 | 99.99th=[42730] 00:31:01.819 bw ( KiB/s): min= 352, max= 416, per=99.99%, avg=384.00, stdev=10.38, samples=20 00:31:01.819 iops : min= 88, max= 104, 
avg=96.00, stdev= 2.60, samples=20 00:31:01.819 lat (usec) : 1000=0.41% 00:31:01.819 lat (msec) : 2=0.41%, 50=99.17% 00:31:01.819 cpu : usr=95.34%, sys=4.32%, ctx=31, majf=0, minf=227 00:31:01.819 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.819 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.819 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:01.819 00:31:01.819 Run status group 0 (all jobs): 00:31:01.819 READ: bw=384KiB/s (393kB/s), 384KiB/s-384KiB/s (393kB/s-393kB/s), io=3856KiB (3949kB), run=10041-10041msec 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.819 00:31:01.819 real 0m11.288s 00:31:01.819 user 0m27.182s 00:31:01.819 sys 0m0.775s 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:01.819 13:16:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:01.819 ************************************ 00:31:01.819 END TEST fio_dif_1_default 00:31:01.819 ************************************ 00:31:01.819 13:16:22 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:01.819 13:16:22 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:01.819 13:16:22 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:01.819 13:16:22 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:01.819 13:16:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:01.819 ************************************ 00:31:01.819 START TEST fio_dif_1_multi_subsystems 00:31:01.819 ************************************ 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@30 -- # for sub in "$@" 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:01.819 bdev_null0 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.819 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:01.820 [2024-07-15 13:16:22.092555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:01.820 bdev_null1 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:01.820 
13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.820 { 00:31:01.820 "params": { 00:31:01.820 "name": "Nvme$subsystem", 00:31:01.820 "trtype": "$TEST_TRANSPORT", 00:31:01.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.820 "adrfam": "ipv4", 00:31:01.820 "trsvcid": "$NVMF_PORT", 00:31:01.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.820 "hdgst": ${hdgst:-false}, 00:31:01.820 "ddgst": ${ddgst:-false} 00:31:01.820 }, 00:31:01.820 "method": "bdev_nvme_attach_controller" 00:31:01.820 } 00:31:01.820 EOF 00:31:01.820 )") 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.820 { 00:31:01.820 "params": { 00:31:01.820 "name": "Nvme$subsystem", 00:31:01.820 "trtype": "$TEST_TRANSPORT", 00:31:01.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.820 "adrfam": "ipv4", 00:31:01.820 "trsvcid": "$NVMF_PORT", 00:31:01.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.820 "hdgst": ${hdgst:-false}, 00:31:01.820 "ddgst": ${ddgst:-false} 00:31:01.820 }, 00:31:01.820 "method": "bdev_nvme_attach_controller" 00:31:01.820 } 00:31:01.820 EOF 00:31:01.820 )") 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
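All of the plumbing above exists to hand fio a JSON config on /dev/fd/62 describing which NVMe-oF controllers to attach, while LD_PRELOAD points at the SPDK fio plugin; the assembled config for both cnode0 and cnode1 is printed just below. On the target side these jobs run against null bdevs created earlier with bdev_null_create ... --md-size 16 --dif-type 1 behind a transport created with --dif-insert-or-strip. A hedged sketch of the equivalent standalone fio invocation, with placeholder file names in place of the /dev/fd redirections the harness uses:

  # bdev.json would hold the printed bdev_nvme_attach_controller params; dif.fio is the job file.
  LD_PRELOAD="$SPDK/build/fio/spdk_bdev" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio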
00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:01.820 "params": { 00:31:01.820 "name": "Nvme0", 00:31:01.820 "trtype": "tcp", 00:31:01.820 "traddr": "10.0.0.2", 00:31:01.820 "adrfam": "ipv4", 00:31:01.820 "trsvcid": "4420", 00:31:01.820 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.820 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:01.820 "hdgst": false, 00:31:01.820 "ddgst": false 00:31:01.820 }, 00:31:01.820 "method": "bdev_nvme_attach_controller" 00:31:01.820 },{ 00:31:01.820 "params": { 00:31:01.820 "name": "Nvme1", 00:31:01.820 "trtype": "tcp", 00:31:01.820 "traddr": "10.0.0.2", 00:31:01.820 "adrfam": "ipv4", 00:31:01.820 "trsvcid": "4420", 00:31:01.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:01.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:01.820 "hdgst": false, 00:31:01.820 "ddgst": false 00:31:01.820 }, 00:31:01.820 "method": "bdev_nvme_attach_controller" 00:31:01.820 }' 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:01.820 13:16:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.820 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:01.820 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:01.820 fio-3.35 00:31:01.820 Starting 2 threads 00:31:01.820 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.813 00:31:11.813 filename0: (groupid=0, jobs=1): err= 0: pid=907922: Mon Jul 15 13:16:33 2024 00:31:11.813 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10013msec) 00:31:11.813 slat (nsec): min=5416, max=27772, avg=6208.75, stdev=1386.61 00:31:11.813 clat (usec): min=812, max=43011, avg=21562.82, stdev=20456.87 00:31:11.813 lat (usec): min=817, max=43039, avg=21569.03, stdev=20456.85 00:31:11.813 clat percentiles (usec): 00:31:11.813 | 1.00th=[ 848], 5.00th=[ 963], 10.00th=[ 996], 20.00th=[ 1020], 00:31:11.813 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[41157], 60.00th=[41681], 00:31:11.813 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:11.813 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:31:11.813 | 99.99th=[43254] 
00:31:11.813 bw ( KiB/s): min= 672, max= 768, per=66.05%, avg=740.80, stdev=34.86, samples=20 00:31:11.813 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:31:11.813 lat (usec) : 1000=10.99% 00:31:11.813 lat (msec) : 2=38.79%, 50=50.22% 00:31:11.813 cpu : usr=96.73%, sys=3.07%, ctx=11, majf=0, minf=117 00:31:11.813 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:11.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.813 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:11.813 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:11.813 filename1: (groupid=0, jobs=1): err= 0: pid=907923: Mon Jul 15 13:16:33 2024 00:31:11.813 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10040msec) 00:31:11.813 slat (nsec): min=5417, max=28070, avg=6451.42, stdev=1767.54 00:31:11.813 clat (usec): min=40961, max=43027, avg=41990.59, stdev=182.68 00:31:11.813 lat (usec): min=40969, max=43033, avg=41997.04, stdev=182.84 00:31:11.813 clat percentiles (usec): 00:31:11.813 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:11.813 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:11.813 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:11.813 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:11.813 | 99.99th=[43254] 00:31:11.813 bw ( KiB/s): min= 352, max= 384, per=33.92%, avg=380.80, stdev= 9.85, samples=20 00:31:11.813 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:31:11.813 lat (msec) : 50=100.00% 00:31:11.813 cpu : usr=96.53%, sys=3.27%, ctx=14, majf=0, minf=130 00:31:11.813 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:11.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.813 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:11.813 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:11.813 00:31:11.813 Run status group 0 (all jobs): 00:31:11.813 READ: bw=1120KiB/s (1147kB/s), 381KiB/s-741KiB/s (390kB/s-759kB/s), io=11.0MiB (11.5MB), run=10013-10040msec 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
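destroy_subsystems, whose trace continues below, tears each target down in two steps: drop the NVMe-oF subsystem, then delete its backing null bdev. rpc_cmd in the trace is the harness wrapper around SPDK's scripts/rpc.py, so the equivalent standalone calls (a sketch, assuming the default RPC socket) look like this:

# Standalone equivalent of the traced teardown for the two test subsystems
# (sketch; the harness issues the same RPCs through its rpc_cmd wrapper).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in 0 1; do
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # remove the subsystem first
    "$rpc" bdev_null_delete "bdev_null$i"                        # then its backing null bdev
done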
00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.813 00:31:11.813 real 0m11.340s 00:31:11.813 user 0m32.500s 00:31:11.813 sys 0m0.955s 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:11.813 13:16:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:11.813 ************************************ 00:31:11.813 END TEST fio_dif_1_multi_subsystems 00:31:11.813 ************************************ 00:31:11.813 13:16:33 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:11.813 13:16:33 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:11.813 13:16:33 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:11.813 13:16:33 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:11.813 13:16:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:11.813 ************************************ 00:31:11.813 START TEST fio_dif_rand_params 00:31:11.813 ************************************ 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.813 bdev_null0 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.813 [2024-07-15 13:16:33.511932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.813 { 00:31:11.813 "params": { 00:31:11.813 "name": "Nvme$subsystem", 00:31:11.813 "trtype": "$TEST_TRANSPORT", 00:31:11.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.813 "adrfam": "ipv4", 00:31:11.813 "trsvcid": "$NVMF_PORT", 00:31:11.813 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.813 "hdgst": ${hdgst:-false}, 00:31:11.813 "ddgst": ${ddgst:-false} 00:31:11.813 }, 00:31:11.813 "method": "bdev_nvme_attach_controller" 00:31:11.813 } 00:31:11.813 EOF 00:31:11.813 )") 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:11.813 "params": { 00:31:11.813 "name": "Nvme0", 00:31:11.813 "trtype": "tcp", 00:31:11.813 "traddr": "10.0.0.2", 00:31:11.813 "adrfam": "ipv4", 00:31:11.813 "trsvcid": "4420", 00:31:11.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:11.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:11.813 "hdgst": false, 00:31:11.813 "ddgst": false 00:31:11.813 }, 00:31:11.813 "method": "bdev_nvme_attach_controller" 00:31:11.813 }' 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:11.813 13:16:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.388 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:12.388 ... 
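The job file itself never appears in the log (gen_fio_conf writes it and hands it over on /dev/fd/61), but the parameters echoed earlier (bs=128k, numjobs=3, iodepth=3, runtime=5) and the job line fio just printed pin most of it down. A plausible reconstruction only: filename=Nvme0n1 is an assumption about the bdev name the attached Nvme0 controller exposes, and thread/time_based are inferred from the output rather than read from the script.

# Reconstructed job file for the first rand_params run (see assumptions above);
# the ioengine itself is supplied on the fio command line in the trace.
cat > job.fio <<'EOF'
[global]
# fio reports "Starting 3 threads", so thread mode is enabled
thread=1
time_based=1
runtime=5

[filename0]
rw=randread
bs=128k
iodepth=3
numjobs=3
# assumed bdev name exposed by the attached "Nvme0" controller, namespace 1
filename=Nvme0n1
EOF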
00:31:12.388 fio-3.35 00:31:12.388 Starting 3 threads 00:31:12.388 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.680 00:31:17.680 filename0: (groupid=0, jobs=1): err= 0: pid=910369: Mon Jul 15 13:16:39 2024 00:31:17.680 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(131MiB/5040msec) 00:31:17.680 slat (nsec): min=7902, max=50511, avg=8669.87, stdev=1848.33 00:31:17.680 clat (usec): min=5716, max=92537, avg=14438.43, stdev=13444.79 00:31:17.680 lat (usec): min=5724, max=92545, avg=14447.10, stdev=13444.81 00:31:17.680 clat percentiles (usec): 00:31:17.680 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6980], 20.00th=[ 8029], 00:31:17.680 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[10421], 60.00th=[11076], 00:31:17.680 | 70.00th=[11731], 80.00th=[12518], 90.00th=[47973], 95.00th=[50594], 00:31:17.680 | 99.00th=[53216], 99.50th=[53740], 99.90th=[90702], 99.95th=[92799], 00:31:17.680 | 99.99th=[92799] 00:31:17.680 bw ( KiB/s): min=21504, max=32512, per=34.54%, avg=26700.80, stdev=3199.83, samples=10 00:31:17.680 iops : min= 168, max= 254, avg=208.60, stdev=25.00, samples=10 00:31:17.680 lat (msec) : 10=45.70%, 20=43.12%, 50=5.35%, 100=5.83% 00:31:17.680 cpu : usr=95.99%, sys=3.71%, ctx=8, majf=0, minf=113 00:31:17.680 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.680 issued rwts: total=1046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.680 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:17.680 filename0: (groupid=0, jobs=1): err= 0: pid=910370: Mon Jul 15 13:16:39 2024 00:31:17.680 read: IOPS=224, BW=28.1MiB/s (29.5MB/s)(141MiB/5007msec) 00:31:17.680 slat (nsec): min=5514, max=44392, avg=7675.42, stdev=1987.64 00:31:17.680 clat (usec): min=4606, max=92455, avg=13327.63, stdev=14230.93 00:31:17.680 lat (usec): min=4615, max=92467, avg=13335.31, stdev=14231.17 00:31:17.680 clat percentiles (usec): 00:31:17.680 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5604], 20.00th=[ 6587], 00:31:17.680 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 9503], 00:31:17.680 | 70.00th=[10290], 80.00th=[11338], 90.00th=[47973], 95.00th=[50070], 00:31:17.680 | 99.00th=[52691], 99.50th=[89654], 99.90th=[91751], 99.95th=[92799], 00:31:17.680 | 99.99th=[92799] 00:31:17.680 bw ( KiB/s): min=10752, max=40192, per=37.19%, avg=28748.80, stdev=9252.31, samples=10 00:31:17.680 iops : min= 84, max= 314, avg=224.60, stdev=72.28, samples=10 00:31:17.680 lat (msec) : 10=66.07%, 20=22.47%, 50=6.31%, 100=5.15% 00:31:17.680 cpu : usr=96.60%, sys=3.16%, ctx=15, majf=0, minf=81 00:31:17.680 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.680 issued rwts: total=1126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.680 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:17.680 filename0: (groupid=0, jobs=1): err= 0: pid=910372: Mon Jul 15 13:16:39 2024 00:31:17.680 read: IOPS=174, BW=21.8MiB/s (22.8MB/s)(109MiB/5004msec) 00:31:17.680 slat (nsec): min=5528, max=83426, avg=8907.72, stdev=4372.53 00:31:17.680 clat (usec): min=5341, max=93920, avg=17199.27, stdev=16160.27 00:31:17.680 lat (usec): min=5347, max=93933, avg=17208.17, stdev=16160.59 00:31:17.680 clat percentiles (usec): 
00:31:17.680 | 1.00th=[ 5932], 5.00th=[ 6456], 10.00th=[ 7373], 20.00th=[ 8455], 00:31:17.680 | 30.00th=[ 8979], 40.00th=[ 9765], 50.00th=[11076], 60.00th=[12387], 00:31:17.680 | 70.00th=[13304], 80.00th=[14746], 90.00th=[50070], 95.00th=[53216], 00:31:17.680 | 99.00th=[56886], 99.50th=[90702], 99.90th=[93848], 99.95th=[93848], 00:31:17.680 | 99.99th=[93848] 00:31:17.680 bw ( KiB/s): min=18944, max=28928, per=28.81%, avg=22272.00, stdev=3251.64, samples=10 00:31:17.680 iops : min= 148, max= 226, avg=174.00, stdev=25.40, samples=10 00:31:17.680 lat (msec) : 10=42.66%, 20=41.51%, 50=4.93%, 100=10.89% 00:31:17.680 cpu : usr=92.96%, sys=5.34%, ctx=73, majf=0, minf=121 00:31:17.680 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.680 issued rwts: total=872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.680 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:17.680 00:31:17.680 Run status group 0 (all jobs): 00:31:17.680 READ: bw=75.5MiB/s (79.2MB/s), 21.8MiB/s-28.1MiB/s (22.8MB/s-29.5MB/s), io=381MiB (399MB), run=5004-5040msec 00:31:17.941 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:17.941 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:17.941 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:17.941 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:17.941 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:17.941 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
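create_subsystems 0 1 2 repeats the same four RPCs per index, and their traces follow immediately below (now with --dif-type 2, per NULL_DIF=2). A standalone sketch of that loop, using scripts/rpc.py directly instead of the harness's rpc_cmd wrapper and copying the traced argument values:

# Traced per-subsystem setup as standalone RPC calls (sketch; the null bdev
# size argument is in MiB, block size 512 with 16-byte metadata, DIF type 2).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in 0 1 2; do
    "$rpc" bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
    "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done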
00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.942 bdev_null0 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.942 [2024-07-15 13:16:39.688104] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.942 bdev_null1 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.942 bdev_null2 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.942 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:18.204 { 00:31:18.204 "params": { 00:31:18.204 "name": "Nvme$subsystem", 00:31:18.204 "trtype": "$TEST_TRANSPORT", 00:31:18.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:18.204 "adrfam": "ipv4", 00:31:18.204 "trsvcid": "$NVMF_PORT", 00:31:18.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:18.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:18.204 "hdgst": ${hdgst:-false}, 00:31:18.204 "ddgst": ${ddgst:-false} 00:31:18.204 }, 00:31:18.204 "method": "bdev_nvme_attach_controller" 00:31:18.204 } 00:31:18.204 EOF 00:31:18.204 )") 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:18.204 { 00:31:18.204 "params": { 00:31:18.204 "name": "Nvme$subsystem", 00:31:18.204 "trtype": "$TEST_TRANSPORT", 00:31:18.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:18.204 "adrfam": "ipv4", 00:31:18.204 "trsvcid": "$NVMF_PORT", 00:31:18.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:18.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:18.204 "hdgst": ${hdgst:-false}, 00:31:18.204 "ddgst": ${ddgst:-false} 00:31:18.204 }, 00:31:18.204 "method": "bdev_nvme_attach_controller" 00:31:18.204 } 00:31:18.204 EOF 00:31:18.204 )") 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:18.204 { 00:31:18.204 "params": { 00:31:18.204 "name": "Nvme$subsystem", 00:31:18.204 "trtype": "$TEST_TRANSPORT", 00:31:18.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:18.204 "adrfam": "ipv4", 00:31:18.204 "trsvcid": "$NVMF_PORT", 00:31:18.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:18.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:18.204 "hdgst": ${hdgst:-false}, 00:31:18.204 "ddgst": ${ddgst:-false} 00:31:18.204 }, 00:31:18.204 "method": "bdev_nvme_attach_controller" 00:31:18.204 } 00:31:18.204 EOF 00:31:18.204 )") 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:18.204 "params": { 00:31:18.204 "name": "Nvme0", 00:31:18.204 "trtype": "tcp", 00:31:18.204 "traddr": "10.0.0.2", 00:31:18.204 "adrfam": "ipv4", 00:31:18.204 "trsvcid": "4420", 00:31:18.204 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:18.204 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:18.204 "hdgst": false, 00:31:18.204 "ddgst": false 00:31:18.204 }, 00:31:18.204 "method": "bdev_nvme_attach_controller" 00:31:18.204 },{ 00:31:18.204 "params": { 00:31:18.204 "name": "Nvme1", 00:31:18.204 "trtype": "tcp", 00:31:18.204 "traddr": "10.0.0.2", 00:31:18.204 "adrfam": "ipv4", 00:31:18.204 "trsvcid": "4420", 00:31:18.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:18.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:18.204 "hdgst": false, 00:31:18.204 "ddgst": false 00:31:18.204 }, 00:31:18.204 "method": "bdev_nvme_attach_controller" 00:31:18.204 },{ 00:31:18.204 "params": { 00:31:18.204 "name": "Nvme2", 00:31:18.204 "trtype": "tcp", 00:31:18.204 "traddr": "10.0.0.2", 00:31:18.204 "adrfam": "ipv4", 00:31:18.204 "trsvcid": "4420", 00:31:18.204 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:18.204 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:18.204 "hdgst": false, 00:31:18.204 "ddgst": false 00:31:18.204 }, 00:31:18.204 "method": "bdev_nvme_attach_controller" 00:31:18.204 }' 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.204 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.205 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:18.205 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:18.205 13:16:39 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:31:18.205 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:18.205 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:18.205 13:16:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.466 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:18.466 ... 00:31:18.466 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:18.466 ... 00:31:18.466 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:18.466 ... 00:31:18.466 fio-3.35 00:31:18.466 Starting 24 threads 00:31:18.466 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.686 00:31:30.686 filename0: (groupid=0, jobs=1): err= 0: pid=911692: Mon Jul 15 13:16:51 2024 00:31:30.686 read: IOPS=496, BW=1987KiB/s (2035kB/s)(19.4MiB/10015msec) 00:31:30.686 slat (nsec): min=5634, max=80817, avg=13807.57, stdev=11364.63 00:31:30.686 clat (usec): min=18332, max=39355, avg=32083.83, stdev=1551.08 00:31:30.686 lat (usec): min=18338, max=39379, avg=32097.64, stdev=1550.92 00:31:30.686 clat percentiles (usec): 00:31:30.686 | 1.00th=[23987], 5.00th=[31065], 10.00th=[31589], 20.00th=[31589], 00:31:30.686 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:30.686 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:30.686 | 99.00th=[34341], 99.50th=[35390], 99.90th=[39060], 99.95th=[39584], 00:31:30.686 | 99.99th=[39584] 00:31:30.686 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=1987.53, stdev=78.17, samples=19 00:31:30.686 iops : min= 480, max= 544, avg=496.84, stdev=19.58, samples=19 00:31:30.686 lat (msec) : 20=0.64%, 50=99.36% 00:31:30.686 cpu : usr=97.62%, sys=1.19%, ctx=74, majf=0, minf=68 00:31:30.686 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:30.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.686 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.686 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.686 filename0: (groupid=0, jobs=1): err= 0: pid=911693: Mon Jul 15 13:16:51 2024 00:31:30.686 read: IOPS=511, BW=2048KiB/s (2097kB/s)(20.0MiB/10005msec) 00:31:30.686 slat (nsec): min=5570, max=83365, avg=14240.66, stdev=11706.67 00:31:30.686 clat (usec): min=14525, max=59976, avg=31142.80, stdev=4251.68 00:31:30.686 lat (usec): min=14532, max=59982, avg=31157.04, stdev=4253.48 00:31:30.686 clat percentiles (usec): 00:31:30.686 | 1.00th=[19530], 5.00th=[21627], 10.00th=[24249], 20.00th=[31327], 00:31:30.686 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:30.686 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:30.686 | 99.00th=[45876], 99.50th=[49021], 99.90th=[60031], 99.95th=[60031], 00:31:30.686 | 99.99th=[60031] 00:31:30.686 bw ( KiB/s): min= 1916, max= 2400, per=4.29%, avg=2052.89, stdev=135.04, samples=19 00:31:30.686 iops : min= 479, max= 600, avg=513.11, stdev=33.86, samples=19 00:31:30.686 lat (msec) : 20=1.33%, 50=98.32%, 100=0.35% 
00:31:30.686 cpu : usr=99.10%, sys=0.60%, ctx=23, majf=0, minf=71 00:31:30.686 IO depths : 1=3.7%, 2=7.4%, 4=20.9%, 8=59.0%, 16=9.0%, 32=0.0%, >=64=0.0% 00:31:30.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.686 complete : 0=0.0%, 4=93.2%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.686 issued rwts: total=5122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.686 filename0: (groupid=0, jobs=1): err= 0: pid=911694: Mon Jul 15 13:16:51 2024 00:31:30.686 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10023msec) 00:31:30.686 slat (nsec): min=5592, max=73549, avg=10898.95, stdev=8066.39 00:31:30.686 clat (usec): min=4255, max=34856, avg=31829.34, stdev=2898.10 00:31:30.686 lat (usec): min=4268, max=34864, avg=31840.24, stdev=2897.68 00:31:30.686 clat percentiles (usec): 00:31:30.686 | 1.00th=[17433], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:30.686 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:30.686 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:30.686 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:31:30.686 | 99.99th=[34866] 00:31:30.686 bw ( KiB/s): min= 1916, max= 2304, per=4.19%, avg=2007.11, stdev=96.00, samples=19 00:31:30.686 iops : min= 479, max= 576, avg=501.74, stdev=23.99, samples=19 00:31:30.686 lat (msec) : 10=0.88%, 20=0.68%, 50=98.45% 00:31:30.686 cpu : usr=99.14%, sys=0.51%, ctx=92, majf=0, minf=73 00:31:30.686 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:30.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.686 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.686 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.686 filename0: (groupid=0, jobs=1): err= 0: pid=911695: Mon Jul 15 13:16:51 2024 00:31:30.686 read: IOPS=503, BW=2013KiB/s (2061kB/s)(19.7MiB/10001msec) 00:31:30.686 slat (nsec): min=5581, max=85765, avg=16699.51, stdev=12535.47 00:31:30.686 clat (usec): min=14386, max=54331, avg=31681.32, stdev=4254.23 00:31:30.686 lat (usec): min=14395, max=54344, avg=31698.02, stdev=4255.60 00:31:30.686 clat percentiles (usec): 00:31:30.686 | 1.00th=[20055], 5.00th=[22414], 10.00th=[26346], 20.00th=[31589], 00:31:30.686 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.686 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[38011], 00:31:30.686 | 99.00th=[47449], 99.50th=[49021], 99.90th=[52167], 99.95th=[52167], 00:31:30.686 | 99.99th=[54264] 00:31:30.686 bw ( KiB/s): min= 1920, max= 2224, per=4.20%, avg=2013.05, stdev=83.53, samples=19 00:31:30.686 iops : min= 480, max= 556, avg=503.26, stdev=20.88, samples=19 00:31:30.686 lat (msec) : 20=0.99%, 50=98.67%, 100=0.34% 00:31:30.686 cpu : usr=99.12%, sys=0.57%, ctx=14, majf=0, minf=97 00:31:30.686 IO depths : 1=3.0%, 2=6.0%, 4=13.7%, 8=66.2%, 16=11.0%, 32=0.0%, >=64=0.0% 00:31:30.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.686 complete : 0=0.0%, 4=91.4%, 8=4.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.686 issued rwts: total=5033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.686 filename0: (groupid=0, jobs=1): err= 0: pid=911697: Mon Jul 15 13:16:51 2024 00:31:30.686 read: 
IOPS=517, BW=2068KiB/s (2118kB/s)(20.2MiB/10018msec) 00:31:30.686 slat (nsec): min=5568, max=66268, avg=8053.50, stdev=5310.48 00:31:30.686 clat (usec): min=2708, max=46052, avg=30856.03, stdev=4614.65 00:31:30.686 lat (usec): min=2727, max=46058, avg=30864.08, stdev=4614.19 00:31:30.686 clat percentiles (usec): 00:31:30.686 | 1.00th=[ 5932], 5.00th=[21103], 10.00th=[24511], 20.00th=[31589], 00:31:30.686 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.686 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:31:30.686 | 99.00th=[34341], 99.50th=[34866], 99.90th=[42730], 99.95th=[44303], 00:31:30.686 | 99.99th=[45876] 00:31:30.686 bw ( KiB/s): min= 1916, max= 2560, per=4.30%, avg=2057.05, stdev=152.77, samples=19 00:31:30.686 iops : min= 479, max= 640, avg=514.26, stdev=38.19, samples=19 00:31:30.686 lat (msec) : 4=0.50%, 10=1.04%, 20=1.85%, 50=96.60% 00:31:30.686 cpu : usr=98.95%, sys=0.76%, ctx=17, majf=0, minf=85 00:31:30.687 IO depths : 1=5.6%, 2=11.4%, 4=23.7%, 8=52.4%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:30.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 complete : 0=0.0%, 4=93.7%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 issued rwts: total=5180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.687 filename0: (groupid=0, jobs=1): err= 0: pid=911698: Mon Jul 15 13:16:51 2024 00:31:30.687 read: IOPS=516, BW=2065KiB/s (2115kB/s)(20.2MiB/10009msec) 00:31:30.687 slat (nsec): min=2770, max=79998, avg=10771.52, stdev=8490.89 00:31:30.687 clat (usec): min=2828, max=34677, avg=30891.66, stdev=4391.67 00:31:30.687 lat (usec): min=2838, max=34684, avg=30902.44, stdev=4392.59 00:31:30.687 clat percentiles (usec): 00:31:30.687 | 1.00th=[ 5538], 5.00th=[21365], 10.00th=[25035], 20.00th=[31589], 00:31:30.687 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.687 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:30.687 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:31:30.687 | 99.99th=[34866] 00:31:30.687 bw ( KiB/s): min= 1920, max= 2944, per=4.32%, avg=2067.95, stdev=242.38, samples=19 00:31:30.687 iops : min= 480, max= 736, avg=516.95, stdev=60.60, samples=19 00:31:30.687 lat (msec) : 4=0.43%, 10=0.81%, 20=3.35%, 50=95.41% 00:31:30.687 cpu : usr=99.22%, sys=0.47%, ctx=30, majf=0, minf=74 00:31:30.687 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:30.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.687 filename0: (groupid=0, jobs=1): err= 0: pid=911699: Mon Jul 15 13:16:51 2024 00:31:30.687 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.5MiB/10002msec) 00:31:30.687 slat (nsec): min=5578, max=81132, avg=14649.87, stdev=10919.84 00:31:30.687 clat (usec): min=15436, max=65290, avg=31867.20, stdev=3907.34 00:31:30.687 lat (usec): min=15444, max=65306, avg=31881.85, stdev=3907.52 00:31:30.687 clat percentiles (usec): 00:31:30.687 | 1.00th=[20317], 5.00th=[23462], 10.00th=[30802], 20.00th=[31589], 00:31:30.687 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.687 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 
00:31:30.687 | 99.00th=[50070], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:31:30.687 | 99.99th=[65274] 00:31:30.687 bw ( KiB/s): min= 1792, max= 2192, per=4.17%, avg=1998.32, stdev=92.97, samples=19 00:31:30.687 iops : min= 448, max= 548, avg=499.58, stdev=23.24, samples=19 00:31:30.687 lat (msec) : 20=0.82%, 50=98.06%, 100=1.12% 00:31:30.687 cpu : usr=98.30%, sys=0.93%, ctx=52, majf=0, minf=46 00:31:30.687 IO depths : 1=4.5%, 2=9.7%, 4=21.5%, 8=56.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:31:30.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 complete : 0=0.0%, 4=93.2%, 8=1.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 issued rwts: total=5002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.687 filename0: (groupid=0, jobs=1): err= 0: pid=911700: Mon Jul 15 13:16:51 2024 00:31:30.687 read: IOPS=495, BW=1982KiB/s (2030kB/s)(19.4MiB/10009msec) 00:31:30.687 slat (nsec): min=5598, max=81521, avg=16639.33, stdev=11596.24 00:31:30.687 clat (usec): min=21648, max=42098, avg=32115.65, stdev=956.63 00:31:30.687 lat (usec): min=21657, max=42125, avg=32132.29, stdev=956.78 00:31:30.687 clat percentiles (usec): 00:31:30.687 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:30.687 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.687 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:30.687 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[41157], 00:31:30.687 | 99.99th=[42206] 00:31:30.687 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1980.63, stdev=65.66, samples=19 00:31:30.687 iops : min= 480, max= 512, avg=495.16, stdev=16.42, samples=19 00:31:30.687 lat (msec) : 50=100.00% 00:31:30.687 cpu : usr=99.13%, sys=0.58%, ctx=13, majf=0, minf=45 00:31:30.687 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:30.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.687 filename1: (groupid=0, jobs=1): err= 0: pid=911701: Mon Jul 15 13:16:51 2024 00:31:30.687 read: IOPS=496, BW=1987KiB/s (2035kB/s)(19.4MiB/10017msec) 00:31:30.687 slat (nsec): min=5440, max=85388, avg=15422.76, stdev=12909.94 00:31:30.687 clat (usec): min=20133, max=36255, avg=32073.70, stdev=1273.39 00:31:30.687 lat (usec): min=20139, max=36270, avg=32089.12, stdev=1273.35 00:31:30.687 clat percentiles (usec): 00:31:30.687 | 1.00th=[24773], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:30.687 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:30.687 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:30.687 | 99.00th=[34341], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:31:30.687 | 99.99th=[36439] 00:31:30.687 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1987.53, stdev=65.50, samples=19 00:31:30.687 iops : min= 480, max= 512, avg=496.84, stdev=16.42, samples=19 00:31:30.687 lat (msec) : 50=100.00% 00:31:30.687 cpu : usr=99.21%, sys=0.48%, ctx=45, majf=0, minf=53 00:31:30.687 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:30.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 complete : 0=0.0%, 4=94.1%, 
8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.687 filename1: (groupid=0, jobs=1): err= 0: pid=911702: Mon Jul 15 13:16:51 2024 00:31:30.687 read: IOPS=485, BW=1941KiB/s (1987kB/s)(19.0MiB/10005msec) 00:31:30.687 slat (nsec): min=5495, max=89874, avg=16109.78, stdev=13203.89 00:31:30.687 clat (usec): min=6934, max=78260, avg=32886.68, stdev=5119.94 00:31:30.687 lat (usec): min=6940, max=78285, avg=32902.79, stdev=5119.34 00:31:30.687 clat percentiles (usec): 00:31:30.687 | 1.00th=[19792], 5.00th=[25822], 10.00th=[31065], 20.00th=[31851], 00:31:30.687 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:30.687 | 70.00th=[32637], 80.00th=[33162], 90.00th=[37487], 95.00th=[43254], 00:31:30.687 | 99.00th=[50070], 99.50th=[52691], 99.90th=[62653], 99.95th=[78119], 00:31:30.687 | 99.99th=[78119] 00:31:30.687 bw ( KiB/s): min= 1488, max= 2064, per=4.04%, avg=1932.63, stdev=134.46, samples=19 00:31:30.687 iops : min= 372, max= 516, avg=483.16, stdev=33.61, samples=19 00:31:30.687 lat (msec) : 10=0.25%, 20=0.82%, 50=97.73%, 100=1.19% 00:31:30.687 cpu : usr=99.12%, sys=0.58%, ctx=27, majf=0, minf=99 00:31:30.687 IO depths : 1=0.3%, 2=2.2%, 4=9.9%, 8=72.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:31:30.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 complete : 0=0.0%, 4=91.1%, 8=6.2%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 issued rwts: total=4854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.687 filename1: (groupid=0, jobs=1): err= 0: pid=911703: Mon Jul 15 13:16:51 2024 00:31:30.687 read: IOPS=501, BW=2005KiB/s (2054kB/s)(19.6MiB/10021msec) 00:31:30.687 slat (nsec): min=5576, max=86223, avg=14836.99, stdev=11613.50 00:31:30.687 clat (usec): min=16656, max=54867, avg=31780.03, stdev=5180.76 00:31:30.687 lat (usec): min=16665, max=54910, avg=31794.87, stdev=5182.04 00:31:30.687 clat percentiles (usec): 00:31:30.687 | 1.00th=[19792], 5.00th=[21627], 10.00th=[25297], 20.00th=[28705], 00:31:30.687 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.687 | 70.00th=[32637], 80.00th=[33162], 90.00th=[37487], 95.00th=[42206], 00:31:30.687 | 99.00th=[47449], 99.50th=[49546], 99.90th=[52691], 99.95th=[54789], 00:31:30.687 | 99.99th=[54789] 00:31:30.687 bw ( KiB/s): min= 1664, max= 2288, per=4.21%, avg=2017.00, stdev=148.00, samples=19 00:31:30.687 iops : min= 416, max= 572, avg=504.21, stdev=37.06, samples=19 00:31:30.687 lat (msec) : 20=1.51%, 50=98.01%, 100=0.48% 00:31:30.687 cpu : usr=99.22%, sys=0.48%, ctx=15, majf=0, minf=53 00:31:30.687 IO depths : 1=2.9%, 2=5.9%, 4=14.3%, 8=66.1%, 16=10.8%, 32=0.0%, >=64=0.0% 00:31:30.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 complete : 0=0.0%, 4=91.5%, 8=4.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.687 filename1: (groupid=0, jobs=1): err= 0: pid=911704: Mon Jul 15 13:16:51 2024 00:31:30.687 read: IOPS=495, BW=1981KiB/s (2029kB/s)(19.4MiB/10013msec) 00:31:30.687 slat (nsec): min=5524, max=87976, avg=18228.54, stdev=13566.19 00:31:30.687 clat (usec): min=14052, max=47749, avg=32116.39, stdev=1832.50 00:31:30.687 lat (usec): min=14058, max=47772, avg=32134.62, 
stdev=1832.53 00:31:30.687 clat percentiles (usec): 00:31:30.687 | 1.00th=[24249], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:30.687 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.687 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:30.687 | 99.00th=[34866], 99.50th=[42730], 99.90th=[47449], 99.95th=[47973], 00:31:30.687 | 99.99th=[47973] 00:31:30.687 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1973.89, stdev=77.69, samples=19 00:31:30.687 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:31:30.687 lat (msec) : 20=0.36%, 50=99.64% 00:31:30.687 cpu : usr=99.10%, sys=0.61%, ctx=31, majf=0, minf=54 00:31:30.687 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:30.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.687 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.687 filename1: (groupid=0, jobs=1): err= 0: pid=911706: Mon Jul 15 13:16:51 2024 00:31:30.687 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10006msec) 00:31:30.687 slat (nsec): min=5532, max=63818, avg=14386.88, stdev=9812.38 00:31:30.687 clat (usec): min=5851, max=63331, avg=31730.82, stdev=4137.96 00:31:30.687 lat (usec): min=5857, max=63350, avg=31745.20, stdev=4138.17 00:31:30.687 clat percentiles (usec): 00:31:30.687 | 1.00th=[19792], 5.00th=[23200], 10.00th=[30540], 20.00th=[31589], 00:31:30.687 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:30.687 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[34341], 00:31:30.687 | 99.00th=[46924], 99.50th=[51643], 99.90th=[63177], 99.95th=[63177], 00:31:30.687 | 99.99th=[63177] 00:31:30.687 bw ( KiB/s): min= 1792, max= 2160, per=4.18%, avg=2001.68, stdev=82.95, samples=19 00:31:30.687 iops : min= 448, max= 540, avg=500.42, stdev=20.74, samples=19 00:31:30.687 lat (msec) : 10=0.32%, 20=0.94%, 50=98.07%, 100=0.68% 00:31:30.687 cpu : usr=98.08%, sys=1.14%, ctx=479, majf=0, minf=65 00:31:30.687 IO depths : 1=3.4%, 2=8.4%, 4=20.8%, 8=57.8%, 16=9.6%, 32=0.0%, >=64=0.0% 00:31:30.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 complete : 0=0.0%, 4=93.2%, 8=1.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 issued rwts: total=5026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.688 filename1: (groupid=0, jobs=1): err= 0: pid=911707: Mon Jul 15 13:16:51 2024 00:31:30.688 read: IOPS=486, BW=1947KiB/s (1994kB/s)(19.0MiB/10008msec) 00:31:30.688 slat (nsec): min=5577, max=84465, avg=20339.34, stdev=14753.44 00:31:30.688 clat (usec): min=7400, max=71836, avg=32756.29, stdev=4221.48 00:31:30.688 lat (usec): min=7405, max=71853, avg=32776.63, stdev=4220.89 00:31:30.688 clat percentiles (usec): 00:31:30.688 | 1.00th=[20841], 5.00th=[26870], 10.00th=[31327], 20.00th=[31851], 00:31:30.688 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:30.688 | 70.00th=[32637], 80.00th=[33162], 90.00th=[35914], 95.00th=[41157], 00:31:30.688 | 99.00th=[49546], 99.50th=[50070], 99.90th=[71828], 99.95th=[71828], 00:31:30.688 | 99.99th=[71828] 00:31:30.688 bw ( KiB/s): min= 1792, max= 2096, per=4.04%, avg=1936.42, stdev=80.82, samples=19 00:31:30.688 iops : min= 448, max= 524, avg=484.11, stdev=20.20, samples=19 
00:31:30.688 lat (msec) : 10=0.21%, 20=0.43%, 50=98.58%, 100=0.78% 00:31:30.688 cpu : usr=98.97%, sys=0.73%, ctx=13, majf=0, minf=66 00:31:30.688 IO depths : 1=0.6%, 2=1.3%, 4=7.7%, 8=77.2%, 16=13.3%, 32=0.0%, >=64=0.0% 00:31:30.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 issued rwts: total=4871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.688 filename1: (groupid=0, jobs=1): err= 0: pid=911708: Mon Jul 15 13:16:51 2024 00:31:30.688 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10005msec) 00:31:30.688 slat (usec): min=5, max=191, avg=14.45, stdev=10.22 00:31:30.688 clat (usec): min=17299, max=45987, avg=32149.86, stdev=1700.66 00:31:30.688 lat (usec): min=17308, max=45996, avg=32164.31, stdev=1701.37 00:31:30.688 clat percentiles (usec): 00:31:30.688 | 1.00th=[23462], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:30.688 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:30.688 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:30.688 | 99.00th=[39584], 99.50th=[41157], 99.90th=[45351], 99.95th=[45876], 00:31:30.688 | 99.99th=[45876] 00:31:30.688 bw ( KiB/s): min= 1907, max= 2052, per=4.14%, avg=1982.00, stdev=65.78, samples=19 00:31:30.688 iops : min= 476, max= 513, avg=495.26, stdev=16.60, samples=19 00:31:30.688 lat (msec) : 20=0.08%, 50=99.92% 00:31:30.688 cpu : usr=98.82%, sys=0.76%, ctx=133, majf=0, minf=55 00:31:30.688 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:30.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.688 filename1: (groupid=0, jobs=1): err= 0: pid=911709: Mon Jul 15 13:16:51 2024 00:31:30.688 read: IOPS=495, BW=1981KiB/s (2028kB/s)(19.4MiB/10005msec) 00:31:30.688 slat (nsec): min=5591, max=68895, avg=14248.31, stdev=10337.49 00:31:30.688 clat (usec): min=17292, max=57673, avg=32184.40, stdev=2094.25 00:31:30.688 lat (usec): min=17303, max=57680, avg=32198.65, stdev=2094.44 00:31:30.688 clat percentiles (usec): 00:31:30.688 | 1.00th=[23200], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:30.688 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:30.688 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:30.688 | 99.00th=[39584], 99.50th=[46400], 99.90th=[57410], 99.95th=[57410], 00:31:30.688 | 99.99th=[57934] 00:31:30.688 bw ( KiB/s): min= 1920, max= 2052, per=4.13%, avg=1979.47, stdev=62.03, samples=19 00:31:30.688 iops : min= 480, max= 513, avg=494.63, stdev=15.66, samples=19 00:31:30.688 lat (msec) : 20=0.08%, 50=99.54%, 100=0.38% 00:31:30.688 cpu : usr=98.98%, sys=0.67%, ctx=63, majf=0, minf=55 00:31:30.688 IO depths : 1=5.7%, 2=11.9%, 4=24.8%, 8=50.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:30.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 issued rwts: total=4954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.688 filename2: (groupid=0, jobs=1): err= 0: pid=911710: 
Mon Jul 15 13:16:51 2024 00:31:30.688 read: IOPS=508, BW=2032KiB/s (2081kB/s)(19.9MiB/10006msec) 00:31:30.688 slat (nsec): min=5576, max=83592, avg=14397.73, stdev=10290.59 00:31:30.688 clat (usec): min=14279, max=66854, avg=31382.74, stdev=4108.35 00:31:30.688 lat (usec): min=14286, max=66878, avg=31397.14, stdev=4109.83 00:31:30.688 clat percentiles (usec): 00:31:30.688 | 1.00th=[19792], 5.00th=[21890], 10.00th=[25560], 20.00th=[31327], 00:31:30.688 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.688 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[35390], 00:31:30.688 | 99.00th=[44303], 99.50th=[48497], 99.90th=[49546], 99.95th=[66847], 00:31:30.688 | 99.99th=[66847] 00:31:30.688 bw ( KiB/s): min= 1792, max= 2240, per=4.24%, avg=2030.32, stdev=109.84, samples=19 00:31:30.688 iops : min= 448, max= 560, avg=507.58, stdev=27.46, samples=19 00:31:30.688 lat (msec) : 20=1.12%, 50=98.82%, 100=0.06% 00:31:30.688 cpu : usr=98.82%, sys=0.80%, ctx=101, majf=0, minf=68 00:31:30.688 IO depths : 1=3.0%, 2=6.5%, 4=17.2%, 8=62.7%, 16=10.6%, 32=0.0%, >=64=0.0% 00:31:30.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 complete : 0=0.0%, 4=92.3%, 8=3.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 issued rwts: total=5084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.688 filename2: (groupid=0, jobs=1): err= 0: pid=911711: Mon Jul 15 13:16:51 2024 00:31:30.688 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.4MiB/10005msec) 00:31:30.688 slat (nsec): min=5555, max=81801, avg=12233.59, stdev=8909.00 00:31:30.688 clat (usec): min=17207, max=45580, avg=32067.83, stdev=1660.82 00:31:30.688 lat (usec): min=17213, max=45602, avg=32080.06, stdev=1661.15 00:31:30.688 clat percentiles (usec): 00:31:30.688 | 1.00th=[21103], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:30.688 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:30.688 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:30.688 | 99.00th=[33817], 99.50th=[34866], 99.90th=[39584], 99.95th=[40109], 00:31:30.688 | 99.99th=[45351] 00:31:30.688 bw ( KiB/s): min= 1920, max= 2052, per=4.15%, avg=1988.74, stdev=65.41, samples=19 00:31:30.688 iops : min= 480, max= 513, avg=496.95, stdev=16.53, samples=19 00:31:30.688 lat (msec) : 20=0.36%, 50=99.64% 00:31:30.688 cpu : usr=99.03%, sys=0.67%, ctx=15, majf=0, minf=56 00:31:30.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:30.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.688 filename2: (groupid=0, jobs=1): err= 0: pid=911712: Mon Jul 15 13:16:51 2024 00:31:30.688 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10019msec) 00:31:30.688 slat (nsec): min=5580, max=73366, avg=13680.94, stdev=9547.50 00:31:30.688 clat (usec): min=14933, max=52703, avg=31964.96, stdev=3422.46 00:31:30.688 lat (usec): min=14940, max=52714, avg=31978.64, stdev=3422.90 00:31:30.688 clat percentiles (usec): 00:31:30.688 | 1.00th=[18744], 5.00th=[25560], 10.00th=[31065], 20.00th=[31589], 00:31:30.688 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.688 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 
95.00th=[34341], 00:31:30.688 | 99.00th=[45351], 99.50th=[47973], 99.90th=[52691], 99.95th=[52691], 00:31:30.688 | 99.99th=[52691] 00:31:30.688 bw ( KiB/s): min= 1896, max= 2096, per=4.16%, avg=1990.05, stdev=71.19, samples=19 00:31:30.688 iops : min= 474, max= 524, avg=497.47, stdev=17.84, samples=19 00:31:30.688 lat (msec) : 20=1.64%, 50=98.10%, 100=0.26% 00:31:30.688 cpu : usr=99.09%, sys=0.62%, ctx=11, majf=0, minf=56 00:31:30.688 IO depths : 1=4.0%, 2=8.9%, 4=21.0%, 8=57.2%, 16=8.8%, 32=0.0%, >=64=0.0% 00:31:30.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 complete : 0=0.0%, 4=93.2%, 8=1.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 issued rwts: total=4998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.688 filename2: (groupid=0, jobs=1): err= 0: pid=911713: Mon Jul 15 13:16:51 2024 00:31:30.688 read: IOPS=495, BW=1982KiB/s (2030kB/s)(19.4MiB/10009msec) 00:31:30.688 slat (nsec): min=5545, max=79859, avg=17476.56, stdev=11830.27 00:31:30.688 clat (usec): min=24318, max=35434, avg=32132.78, stdev=870.53 00:31:30.688 lat (usec): min=24325, max=35454, avg=32150.26, stdev=870.47 00:31:30.688 clat percentiles (usec): 00:31:30.688 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:30.688 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.688 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:30.688 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:31:30.688 | 99.99th=[35390] 00:31:30.688 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1980.63, stdev=65.66, samples=19 00:31:30.688 iops : min= 480, max= 512, avg=495.16, stdev=16.42, samples=19 00:31:30.688 lat (msec) : 50=100.00% 00:31:30.688 cpu : usr=98.77%, sys=0.74%, ctx=31, majf=0, minf=62 00:31:30.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:30.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.688 filename2: (groupid=0, jobs=1): err= 0: pid=911714: Mon Jul 15 13:16:51 2024 00:31:30.688 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10005msec) 00:31:30.688 slat (nsec): min=5630, max=59197, avg=13937.28, stdev=8600.87 00:31:30.688 clat (usec): min=5779, max=76741, avg=32151.90, stdev=2595.22 00:31:30.688 lat (usec): min=5791, max=76764, avg=32165.84, stdev=2595.09 00:31:30.688 clat percentiles (usec): 00:31:30.688 | 1.00th=[30016], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:31:30.688 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:30.688 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:30.688 | 99.00th=[34341], 99.50th=[34866], 99.90th=[62653], 99.95th=[62653], 00:31:30.688 | 99.99th=[77071] 00:31:30.688 bw ( KiB/s): min= 1795, max= 2048, per=4.12%, avg=1974.05, stdev=77.30, samples=19 00:31:30.688 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:31:30.688 lat (msec) : 10=0.32%, 50=99.35%, 100=0.32% 00:31:30.688 cpu : usr=99.20%, sys=0.51%, ctx=9, majf=0, minf=72 00:31:30.688 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:30.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:30.688 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.688 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.689 filename2: (groupid=0, jobs=1): err= 0: pid=911715: Mon Jul 15 13:16:51 2024 00:31:30.689 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.4MiB/10006msec) 00:31:30.689 slat (nsec): min=5582, max=62374, avg=14809.93, stdev=9737.50 00:31:30.689 clat (usec): min=9618, max=63303, avg=32114.91, stdev=3194.84 00:31:30.689 lat (usec): min=9624, max=63322, avg=32129.72, stdev=3194.99 00:31:30.689 clat percentiles (usec): 00:31:30.689 | 1.00th=[21365], 5.00th=[30802], 10.00th=[31327], 20.00th=[31851], 00:31:30.689 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.689 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:31:30.689 | 99.00th=[48497], 99.50th=[50594], 99.90th=[63177], 99.95th=[63177], 00:31:30.689 | 99.99th=[63177] 00:31:30.689 bw ( KiB/s): min= 1792, max= 2080, per=4.13%, avg=1977.26, stdev=75.30, samples=19 00:31:30.689 iops : min= 448, max= 520, avg=494.32, stdev=18.82, samples=19 00:31:30.689 lat (msec) : 10=0.28%, 20=0.24%, 50=98.95%, 100=0.52% 00:31:30.689 cpu : usr=99.19%, sys=0.49%, ctx=60, majf=0, minf=63 00:31:30.689 IO depths : 1=3.7%, 2=9.6%, 4=23.8%, 8=53.9%, 16=8.9%, 32=0.0%, >=64=0.0% 00:31:30.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.689 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.689 issued rwts: total=4966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.689 filename2: (groupid=0, jobs=1): err= 0: pid=911716: Mon Jul 15 13:16:51 2024 00:31:30.689 read: IOPS=496, BW=1986KiB/s (2034kB/s)(19.4MiB/10020msec) 00:31:30.689 slat (nsec): min=5591, max=88303, avg=18891.48, stdev=13895.14 00:31:30.689 clat (usec): min=17448, max=45992, avg=32049.31, stdev=1825.00 00:31:30.689 lat (usec): min=17454, max=46014, avg=32068.20, stdev=1825.26 00:31:30.689 clat percentiles (usec): 00:31:30.689 | 1.00th=[23200], 5.00th=[31065], 10.00th=[31589], 20.00th=[31589], 00:31:30.689 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.689 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:31:30.689 | 99.00th=[38536], 99.50th=[41157], 99.90th=[45876], 99.95th=[45876], 00:31:30.689 | 99.99th=[45876] 00:31:30.689 bw ( KiB/s): min= 1920, max= 2052, per=4.14%, avg=1984.85, stdev=64.18, samples=20 00:31:30.689 iops : min= 480, max= 513, avg=496.10, stdev=16.16, samples=20 00:31:30.689 lat (msec) : 20=0.20%, 50=99.80% 00:31:30.689 cpu : usr=99.03%, sys=0.65%, ctx=66, majf=0, minf=73 00:31:30.689 IO depths : 1=5.1%, 2=11.2%, 4=24.9%, 8=51.4%, 16=7.4%, 32=0.0%, >=64=0.0% 00:31:30.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.689 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.689 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.689 filename2: (groupid=0, jobs=1): err= 0: pid=911718: Mon Jul 15 13:16:51 2024 00:31:30.689 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10010msec) 00:31:30.689 slat (nsec): min=5454, max=71018, avg=15082.60, stdev=10449.01 00:31:30.689 clat (usec): min=13118, max=67001, avg=32235.25, stdev=4043.39 00:31:30.689 lat (usec): min=13124, 
max=67019, avg=32250.33, stdev=4043.38 00:31:30.689 clat percentiles (usec): 00:31:30.689 | 1.00th=[19792], 5.00th=[26346], 10.00th=[31065], 20.00th=[31589], 00:31:30.689 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:30.689 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[34866], 00:31:30.689 | 99.00th=[49546], 99.50th=[53740], 99.90th=[63177], 99.95th=[66847], 00:31:30.689 | 99.99th=[66847] 00:31:30.689 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1973.89, stdev=70.49, samples=19 00:31:30.689 iops : min= 448, max= 512, avg=493.47, stdev=17.62, samples=19 00:31:30.689 lat (msec) : 20=1.41%, 50=97.62%, 100=0.97% 00:31:30.689 cpu : usr=99.10%, sys=0.60%, ctx=11, majf=0, minf=56 00:31:30.689 IO depths : 1=3.1%, 2=7.5%, 4=22.6%, 8=57.2%, 16=9.6%, 32=0.0%, >=64=0.0% 00:31:30.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.689 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.689 issued rwts: total=4948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:30.689 00:31:30.689 Run status group 0 (all jobs): 00:31:30.689 READ: bw=46.8MiB/s (49.0MB/s), 1941KiB/s-2068KiB/s (1987kB/s-2118kB/s), io=469MiB (491MB), run=10001-10023msec 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 bdev_null0 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.689 13:16:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 [2024-07-15 13:16:51.495969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 bdev_null1 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.689 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
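The trace above completes the per-test setup: two DIF-type-1 null bdevs are created and exported over NVMe/TCP as cnode0 and cnode1, both listening on 10.0.0.2:4420, before the target JSON and job file below are generated and handed to fio. Inside the harness, rpc_cmd talks to the running nvmf_tgt over its RPC socket; a minimal standalone sketch of the same setup, assuming an already-running target with a TCP transport created, would be:

    # parameters copied from the rpc_cmd calls in the trace above
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # the trace repeats the same four calls for bdev_null1 / cnode1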
00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:30.690 { 00:31:30.690 "params": { 00:31:30.690 "name": "Nvme$subsystem", 00:31:30.690 "trtype": "$TEST_TRANSPORT", 00:31:30.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.690 "adrfam": "ipv4", 00:31:30.690 "trsvcid": "$NVMF_PORT", 00:31:30.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.690 "hdgst": ${hdgst:-false}, 00:31:30.690 "ddgst": ${ddgst:-false} 00:31:30.690 }, 00:31:30.690 "method": "bdev_nvme_attach_controller" 00:31:30.690 } 00:31:30.690 EOF 00:31:30.690 )") 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:30.690 { 00:31:30.690 "params": { 00:31:30.690 "name": "Nvme$subsystem", 00:31:30.690 "trtype": "$TEST_TRANSPORT", 00:31:30.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.690 "adrfam": "ipv4", 00:31:30.690 "trsvcid": "$NVMF_PORT", 00:31:30.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.690 "hdgst": ${hdgst:-false}, 00:31:30.690 "ddgst": ${ddgst:-false} 00:31:30.690 }, 00:31:30.690 "method": "bdev_nvme_attach_controller" 
00:31:30.690 } 00:31:30.690 EOF 00:31:30.690 )") 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:30.690 "params": { 00:31:30.690 "name": "Nvme0", 00:31:30.690 "trtype": "tcp", 00:31:30.690 "traddr": "10.0.0.2", 00:31:30.690 "adrfam": "ipv4", 00:31:30.690 "trsvcid": "4420", 00:31:30.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.690 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:30.690 "hdgst": false, 00:31:30.690 "ddgst": false 00:31:30.690 }, 00:31:30.690 "method": "bdev_nvme_attach_controller" 00:31:30.690 },{ 00:31:30.690 "params": { 00:31:30.690 "name": "Nvme1", 00:31:30.690 "trtype": "tcp", 00:31:30.690 "traddr": "10.0.0.2", 00:31:30.690 "adrfam": "ipv4", 00:31:30.690 "trsvcid": "4420", 00:31:30.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:30.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:30.690 "hdgst": false, 00:31:30.690 "ddgst": false 00:31:30.690 }, 00:31:30.690 "method": "bdev_nvme_attach_controller" 00:31:30.690 }' 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:30.690 13:16:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.690 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:30.690 ... 00:31:30.690 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:30.690 ... 
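The JSON just printed (two bdev_nvme_attach_controller calls, one per subsystem) reaches fio through /dev/fd/62, and the generated job file through /dev/fd/61; the visible result is the two randread jobs above with mixed 8k/16k/128k block sizes at iodepth 8, run with the numjobs=2 and runtime=5 settings from the target/dif.sh@115 lines earlier in the trace. A rough standalone equivalent, with the JSON saved to a file and the namespace bdev names Nvme0n1/Nvme1n1 assumed from SPDK's usual controller-plus-namespace naming (the exact job file emitted by gen_fio_conf may differ), could look like:

    # hypothetical file names; the harness pipes both configs through /dev/fd instead
    cat <<'EOF' > dif_rand_params.fio
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=./nvme_tcp.json
    thread=1
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1        ; assumed bdev name for the cnode0 namespace

    [filename1]
    filename=Nvme1n1        ; assumed bdev name for the cnode1 namespace
    EOF
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio dif_rand_params.fio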
00:31:30.690 fio-3.35 00:31:30.690 Starting 4 threads 00:31:30.690 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.054 00:31:36.054 filename0: (groupid=0, jobs=1): err= 0: pid=914134: Mon Jul 15 13:16:57 2024 00:31:36.054 read: IOPS=2209, BW=17.3MiB/s (18.1MB/s)(86.4MiB/5003msec) 00:31:36.054 slat (nsec): min=5395, max=36911, avg=5889.53, stdev=1219.64 00:31:36.054 clat (usec): min=1727, max=43611, avg=3605.03, stdev=1224.28 00:31:36.054 lat (usec): min=1732, max=43647, avg=3610.92, stdev=1224.49 00:31:36.054 clat percentiles (usec): 00:31:36.054 | 1.00th=[ 2507], 5.00th=[ 2802], 10.00th=[ 2999], 20.00th=[ 3195], 00:31:36.054 | 30.00th=[ 3326], 40.00th=[ 3425], 50.00th=[ 3523], 60.00th=[ 3556], 00:31:36.054 | 70.00th=[ 3687], 80.00th=[ 3785], 90.00th=[ 4293], 95.00th=[ 5080], 00:31:36.054 | 99.00th=[ 5473], 99.50th=[ 5735], 99.90th=[ 6325], 99.95th=[43779], 00:31:36.054 | 99.99th=[43779] 00:31:36.054 bw ( KiB/s): min=16320, max=18432, per=26.16%, avg=17674.67, stdev=636.14, samples=9 00:31:36.054 iops : min= 2040, max= 2304, avg=2209.33, stdev=79.52, samples=9 00:31:36.054 lat (msec) : 2=0.08%, 4=87.15%, 10=12.70%, 50=0.07% 00:31:36.054 cpu : usr=97.72%, sys=2.02%, ctx=8, majf=0, minf=0 00:31:36.054 IO depths : 1=0.1%, 2=0.7%, 4=68.6%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.054 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.054 issued rwts: total=11055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.054 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:36.054 filename0: (groupid=0, jobs=1): err= 0: pid=914135: Mon Jul 15 13:16:57 2024 00:31:36.054 read: IOPS=2065, BW=16.1MiB/s (16.9MB/s)(80.7MiB/5002msec) 00:31:36.054 slat (nsec): min=5408, max=35305, avg=7137.04, stdev=2294.26 00:31:36.054 clat (usec): min=1374, max=47295, avg=3852.68, stdev=1382.53 00:31:36.054 lat (usec): min=1379, max=47324, avg=3859.81, stdev=1382.67 00:31:36.054 clat percentiles (usec): 00:31:36.054 | 1.00th=[ 2868], 5.00th=[ 3163], 10.00th=[ 3261], 20.00th=[ 3392], 00:31:36.054 | 30.00th=[ 3458], 40.00th=[ 3523], 50.00th=[ 3621], 60.00th=[ 3687], 00:31:36.054 | 70.00th=[ 3752], 80.00th=[ 4015], 90.00th=[ 5145], 95.00th=[ 5407], 00:31:36.054 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6456], 99.95th=[47449], 00:31:36.054 | 99.99th=[47449] 00:31:36.054 bw ( KiB/s): min=15216, max=17104, per=24.43%, avg=16508.44, stdev=545.34, samples=9 00:31:36.054 iops : min= 1902, max= 2138, avg=2063.56, stdev=68.17, samples=9 00:31:36.054 lat (msec) : 2=0.09%, 4=79.57%, 10=20.26%, 50=0.08% 00:31:36.054 cpu : usr=97.00%, sys=2.74%, ctx=9, majf=0, minf=9 00:31:36.054 IO depths : 1=0.1%, 2=0.3%, 4=72.1%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.054 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.054 issued rwts: total=10334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.054 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:36.054 filename1: (groupid=0, jobs=1): err= 0: pid=914136: Mon Jul 15 13:16:57 2024 00:31:36.054 read: IOPS=2098, BW=16.4MiB/s (17.2MB/s)(82.0MiB/5001msec) 00:31:36.054 slat (nsec): min=5406, max=55542, avg=7426.53, stdev=2650.64 00:31:36.055 clat (usec): min=1440, max=6306, avg=3792.52, stdev=666.82 00:31:36.055 lat (usec): min=1448, max=6314, avg=3799.95, stdev=666.60 00:31:36.055 clat percentiles (usec): 00:31:36.055 | 1.00th=[ 
2180], 5.00th=[ 3130], 10.00th=[ 3261], 20.00th=[ 3392], 00:31:36.055 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3654], 60.00th=[ 3720], 00:31:36.055 | 70.00th=[ 3785], 80.00th=[ 4047], 90.00th=[ 5080], 95.00th=[ 5342], 00:31:36.055 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6128], 99.95th=[ 6259], 00:31:36.055 | 99.99th=[ 6325] 00:31:36.055 bw ( KiB/s): min=16240, max=17763, per=24.84%, avg=16786.11, stdev=448.35, samples=9 00:31:36.055 iops : min= 2030, max= 2220, avg=2098.22, stdev=55.94, samples=9 00:31:36.055 lat (msec) : 2=0.79%, 4=77.80%, 10=21.41% 00:31:36.055 cpu : usr=96.72%, sys=3.00%, ctx=19, majf=0, minf=9 00:31:36.055 IO depths : 1=0.1%, 2=0.4%, 4=71.6%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.055 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.055 issued rwts: total=10494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.055 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:36.055 filename1: (groupid=0, jobs=1): err= 0: pid=914137: Mon Jul 15 13:16:57 2024 00:31:36.055 read: IOPS=2074, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5004msec) 00:31:36.055 slat (usec): min=5, max=151, avg= 7.74, stdev= 3.02 00:31:36.055 clat (usec): min=1457, max=7022, avg=3835.25, stdev=668.31 00:31:36.055 lat (usec): min=1481, max=7028, avg=3842.99, stdev=668.17 00:31:36.055 clat percentiles (usec): 00:31:36.055 | 1.00th=[ 2507], 5.00th=[ 3097], 10.00th=[ 3261], 20.00th=[ 3392], 00:31:36.055 | 30.00th=[ 3490], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3752], 00:31:36.055 | 70.00th=[ 3884], 80.00th=[ 4228], 90.00th=[ 5014], 95.00th=[ 5342], 00:31:36.055 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6259], 99.95th=[ 6652], 00:31:36.055 | 99.99th=[ 7046] 00:31:36.055 bw ( KiB/s): min=15712, max=17312, per=24.57%, avg=16603.20, stdev=547.75, samples=10 00:31:36.055 iops : min= 1964, max= 2164, avg=2075.40, stdev=68.47, samples=10 00:31:36.055 lat (msec) : 2=0.38%, 4=73.69%, 10=25.94% 00:31:36.055 cpu : usr=96.92%, sys=2.82%, ctx=8, majf=0, minf=0 00:31:36.055 IO depths : 1=0.1%, 2=0.4%, 4=70.2%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.055 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.055 issued rwts: total=10382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.055 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:36.055 00:31:36.055 Run status group 0 (all jobs): 00:31:36.055 READ: bw=66.0MiB/s (69.2MB/s), 16.1MiB/s-17.3MiB/s (16.9MB/s-18.1MB/s), io=330MiB (346MB), run=5001-5004msec 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.055 00:31:36.055 real 0m24.273s 00:31:36.055 user 5m17.563s 00:31:36.055 sys 0m3.857s 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:36.055 13:16:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:36.055 ************************************ 00:31:36.055 END TEST fio_dif_rand_params 00:31:36.055 ************************************ 00:31:36.055 13:16:57 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:36.055 13:16:57 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:36.055 13:16:57 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:36.055 13:16:57 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.055 13:16:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:36.055 ************************************ 00:31:36.055 START TEST fio_dif_digest 00:31:36.055 ************************************ 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:36.055 bdev_null0 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:36.055 [2024-07-15 13:16:57.865241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:36.055 { 00:31:36.055 "params": { 00:31:36.055 "name": "Nvme$subsystem", 00:31:36.055 "trtype": "$TEST_TRANSPORT", 00:31:36.055 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:36.055 "adrfam": "ipv4", 00:31:36.055 "trsvcid": "$NVMF_PORT", 00:31:36.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:36.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:36.055 "hdgst": ${hdgst:-false}, 00:31:36.055 "ddgst": ${ddgst:-false} 00:31:36.055 }, 00:31:36.055 "method": "bdev_nvme_attach_controller" 00:31:36.055 } 00:31:36.055 EOF 00:31:36.055 )") 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.055 13:16:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:36.315 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.315 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:36.316 "params": { 00:31:36.316 "name": "Nvme0", 00:31:36.316 "trtype": "tcp", 00:31:36.316 "traddr": "10.0.0.2", 00:31:36.316 "adrfam": "ipv4", 00:31:36.316 "trsvcid": "4420", 00:31:36.316 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:36.316 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:36.316 "hdgst": true, 00:31:36.316 "ddgst": true 00:31:36.316 }, 00:31:36.316 "method": "bdev_nvme_attach_controller" 00:31:36.316 }' 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:36.316 13:16:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.576 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:36.576 ... 
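For the digest case the null bdev is created with --dif-type 3 and the attach parameters printed above enable NVMe/TCP header and data digests ("hdgst": true, "ddgst": true); fio then runs 3 threads of 128 KiB random reads at iodepth 3 for 10 seconds. A minimal sketch of a JSON target config carrying those host-side parameters, with the params block copied from the printf output above and the outer subsystems/config wrapper assumed to follow SPDK's standard bdev JSON layout:

    # hypothetical file name; the harness generates this on the fly and passes it via /dev/fd/62
    cat <<'EOF' > nvme_tcp_digest.json
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true,
                "ddgst": true
              }
            }
          ]
        }
      ]
    }
    EOF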
00:31:36.576 fio-3.35 00:31:36.576 Starting 3 threads 00:31:36.576 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.809 00:31:48.809 filename0: (groupid=0, jobs=1): err= 0: pid=915377: Mon Jul 15 13:17:08 2024 00:31:48.809 read: IOPS=230, BW=28.8MiB/s (30.2MB/s)(288MiB/10007msec) 00:31:48.809 slat (nsec): min=5677, max=31883, avg=6440.69, stdev=950.24 00:31:48.809 clat (usec): min=7335, max=55522, avg=13024.79, stdev=3564.99 00:31:48.809 lat (usec): min=7341, max=55528, avg=13031.23, stdev=3565.00 00:31:48.809 clat percentiles (usec): 00:31:48.809 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[11076], 20.00th=[11863], 00:31:48.809 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:31:48.809 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14353], 95.00th=[14746], 00:31:48.809 | 99.00th=[15795], 99.50th=[53216], 99.90th=[54789], 99.95th=[54789], 00:31:48.809 | 99.99th=[55313] 00:31:48.809 bw ( KiB/s): min=26112, max=33024, per=36.01%, avg=29452.80, stdev=1634.93, samples=20 00:31:48.809 iops : min= 204, max= 258, avg=230.10, stdev=12.77, samples=20 00:31:48.809 lat (msec) : 10=4.73%, 20=94.62%, 100=0.65% 00:31:48.809 cpu : usr=95.56%, sys=4.21%, ctx=17, majf=0, minf=139 00:31:48.809 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.809 issued rwts: total=2303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.809 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:48.809 filename0: (groupid=0, jobs=1): err= 0: pid=915378: Mon Jul 15 13:17:08 2024 00:31:48.809 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(258MiB/10046msec) 00:31:48.809 slat (nsec): min=5643, max=31945, avg=7930.67, stdev=1581.74 00:31:48.809 clat (usec): min=8706, max=56950, avg=14584.84, stdev=4526.34 00:31:48.809 lat (usec): min=8716, max=56957, avg=14592.77, stdev=4526.39 00:31:48.809 clat percentiles (usec): 00:31:48.809 | 1.00th=[ 9372], 5.00th=[10814], 10.00th=[12256], 20.00th=[13173], 00:31:48.809 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14353], 60.00th=[14615], 00:31:48.809 | 70.00th=[15008], 80.00th=[15401], 90.00th=[16057], 95.00th=[16450], 00:31:48.809 | 99.00th=[51643], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:31:48.809 | 99.99th=[56886] 00:31:48.809 bw ( KiB/s): min=22528, max=28928, per=32.22%, avg=26355.20, stdev=1715.24, samples=20 00:31:48.809 iops : min= 176, max= 226, avg=205.90, stdev=13.40, samples=20 00:31:48.809 lat (msec) : 10=2.62%, 20=96.27%, 50=0.05%, 100=1.07% 00:31:48.809 cpu : usr=96.11%, sys=3.65%, ctx=26, majf=0, minf=98 00:31:48.809 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.809 issued rwts: total=2062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.809 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:48.809 filename0: (groupid=0, jobs=1): err= 0: pid=915379: Mon Jul 15 13:17:08 2024 00:31:48.809 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(257MiB/10007msec) 00:31:48.809 slat (nsec): min=5641, max=33059, avg=6483.00, stdev=1097.92 00:31:48.809 clat (msec): min=7, max=132, avg=14.60, stdev= 5.83 00:31:48.809 lat (msec): min=7, max=132, avg=14.60, stdev= 5.83 00:31:48.809 clat percentiles (msec): 00:31:48.809 | 1.00th=[ 10], 5.00th=[ 12], 
10.00th=[ 13], 20.00th=[ 14], 00:31:48.809 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 15], 00:31:48.809 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 16], 95.00th=[ 17], 00:31:48.809 | 99.00th=[ 55], 99.50th=[ 57], 99.90th=[ 93], 99.95th=[ 94], 00:31:48.809 | 99.99th=[ 133] 00:31:48.809 bw ( KiB/s): min=21504, max=29184, per=32.11%, avg=26265.60, stdev=2082.07, samples=20 00:31:48.809 iops : min= 168, max= 228, avg=205.20, stdev=16.27, samples=20 00:31:48.809 lat (msec) : 10=1.80%, 20=96.89%, 50=0.05%, 100=1.22%, 250=0.05% 00:31:48.809 cpu : usr=95.89%, sys=3.88%, ctx=14, majf=0, minf=178 00:31:48.809 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.809 issued rwts: total=2055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.809 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:48.809 00:31:48.809 Run status group 0 (all jobs): 00:31:48.809 READ: bw=79.9MiB/s (83.8MB/s), 25.7MiB/s-28.8MiB/s (26.9MB/s-30.2MB/s), io=803MiB (841MB), run=10007-10046msec 00:31:48.809 13:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:48.809 13:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:48.809 13:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:48.809 13:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:48.809 13:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:48.809 13:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:48.809 13:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.809 13:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:48.809 13:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.810 13:17:09 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:48.810 13:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.810 13:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:48.810 13:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.810 00:31:48.810 real 0m11.264s 00:31:48.810 user 0m42.675s 00:31:48.810 sys 0m1.485s 00:31:48.810 13:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:48.810 13:17:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:48.810 ************************************ 00:31:48.810 END TEST fio_dif_digest 00:31:48.810 ************************************ 00:31:48.810 13:17:09 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:48.810 13:17:09 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:48.810 13:17:09 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:48.810 13:17:09 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:48.810 13:17:09 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:48.810 13:17:09 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:48.810 13:17:09 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:48.810 13:17:09 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:48.810 13:17:09 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:48.810 rmmod nvme_tcp 
00:31:48.810 rmmod nvme_fabrics 00:31:48.810 rmmod nvme_keyring 00:31:48.810 13:17:09 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:48.810 13:17:09 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:48.810 13:17:09 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:48.810 13:17:09 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 905182 ']' 00:31:48.810 13:17:09 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 905182 00:31:48.810 13:17:09 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 905182 ']' 00:31:48.810 13:17:09 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 905182 00:31:48.810 13:17:09 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:48.810 13:17:09 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:48.810 13:17:09 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 905182 00:31:48.810 13:17:09 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:48.810 13:17:09 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:48.810 13:17:09 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 905182' 00:31:48.810 killing process with pid 905182 00:31:48.810 13:17:09 nvmf_dif -- common/autotest_common.sh@967 -- # kill 905182 00:31:48.810 13:17:09 nvmf_dif -- common/autotest_common.sh@972 -- # wait 905182 00:31:48.810 13:17:09 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:48.810 13:17:09 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:51.375 Waiting for block devices as requested 00:31:51.375 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:51.375 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:51.636 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:51.636 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:51.636 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:51.896 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:51.896 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:51.896 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:52.157 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:52.157 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:52.157 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:52.417 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:52.417 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:52.417 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:52.417 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:52.678 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:52.678 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:52.678 13:17:14 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:52.678 13:17:14 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:52.678 13:17:14 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:52.678 13:17:14 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:52.678 13:17:14 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.678 13:17:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:52.678 13:17:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.225 13:17:16 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:55.225 00:31:55.225 real 1m18.972s 00:31:55.225 user 8m2.607s 00:31:55.225 sys 0m20.771s 00:31:55.225 13:17:16 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:55.225 13:17:16 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:31:55.225 ************************************ 00:31:55.225 END TEST nvmf_dif 00:31:55.225 ************************************ 00:31:55.225 13:17:16 -- common/autotest_common.sh@1142 -- # return 0 00:31:55.225 13:17:16 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:55.225 13:17:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:55.225 13:17:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:55.225 13:17:16 -- common/autotest_common.sh@10 -- # set +x 00:31:55.225 ************************************ 00:31:55.225 START TEST nvmf_abort_qd_sizes 00:31:55.225 ************************************ 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:55.225 * Looking for test storage... 00:31:55.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.225 13:17:16 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:55.225 13:17:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:03.367 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:03.367 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:03.367 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:03.367 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:03.367 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:03.367 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:03.368 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:03.368 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:03.368 Found net devices under 0000:31:00.0: cvl_0_0 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:03.368 Found net devices under 0000:31:00.1: cvl_0_1 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:03.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:03.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:32:03.368 00:32:03.368 --- 10.0.0.2 ping statistics --- 00:32:03.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.368 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:03.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:03.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:32:03.368 00:32:03.368 --- 10.0.0.1 ping statistics --- 00:32:03.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.368 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:03.368 13:17:24 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:07.569 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:07.569 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=925682 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 925682 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 925682 ']' 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:07.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:07.569 13:17:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:07.569 [2024-07-15 13:17:28.945244] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:32:07.569 [2024-07-15 13:17:28.945294] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.569 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.569 [2024-07-15 13:17:29.018396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:07.569 [2024-07-15 13:17:29.085138] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:07.569 [2024-07-15 13:17:29.085175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.569 [2024-07-15 13:17:29.085183] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:07.569 [2024-07-15 13:17:29.085189] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:07.569 [2024-07-15 13:17:29.085195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:07.569 [2024-07-15 13:17:29.085287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.569 [2024-07-15 13:17:29.085439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:07.569 [2024-07-15 13:17:29.085649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:07.569 [2024-07-15 13:17:29.085650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.139 13:17:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:32:08.140 13:17:29 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:08.140 13:17:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:08.140 ************************************ 00:32:08.140 START TEST spdk_target_abort 00:32:08.140 ************************************ 00:32:08.140 13:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:32:08.140 13:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:08.140 13:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:32:08.140 13:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.140 13:17:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:08.400 spdk_targetn1 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:08.400 [2024-07-15 13:17:30.125344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:08.400 [2024-07-15 13:17:30.165581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:08.400 13:17:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:08.400 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:08.660 [2024-07-15 13:17:30.346661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:512 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:32:08.660 [2024-07-15 13:17:30.346689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0041 p:1 m:0 dnr:0 00:32:08.660 [2024-07-15 13:17:30.349432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:688 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:32:08.660 [2024-07-15 13:17:30.349448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:32:08.660 [2024-07-15 13:17:30.408724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2800 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:32:08.660 [2024-07-15 13:17:30.408742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:11.954 Initializing NVMe Controllers 00:32:11.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:11.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:11.954 Initialization complete. Launching workers. 00:32:11.954 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11651, failed: 3 00:32:11.954 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2760, failed to submit 8894 00:32:11.954 success 735, unsuccess 2025, failed 0 00:32:11.954 13:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:11.954 13:17:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:11.954 EAL: No free 2048 kB hugepages reported on node 1 00:32:15.252 Initializing NVMe Controllers 00:32:15.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:15.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:15.252 Initialization complete. Launching workers. 
00:32:15.252 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8400, failed: 0 00:32:15.252 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1215, failed to submit 7185 00:32:15.252 success 347, unsuccess 868, failed 0 00:32:15.252 13:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:15.252 13:17:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:15.252 EAL: No free 2048 kB hugepages reported on node 1 00:32:15.252 [2024-07-15 13:17:37.060688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:158 nsid:1 lba:8960 len:8 PRP1 0x2000078ec000 PRP2 0x0 00:32:15.252 [2024-07-15 13:17:37.060728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:158 cdw0:0 sqhd:00ea p:0 m:0 dnr:0 00:32:18.548 Initializing NVMe Controllers 00:32:18.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:18.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:18.548 Initialization complete. Launching workers. 00:32:18.548 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42436, failed: 1 00:32:18.548 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2616, failed to submit 39821 00:32:18.548 success 612, unsuccess 2004, failed 0 00:32:18.548 13:17:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:18.548 13:17:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.548 13:17:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:18.548 13:17:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.548 13:17:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:18.548 13:17:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.548 13:17:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:20.459 13:17:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.459 13:17:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 925682 00:32:20.459 13:17:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 925682 ']' 00:32:20.459 13:17:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 925682 00:32:20.459 13:17:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:32:20.459 13:17:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:20.459 13:17:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 925682 00:32:20.459 13:17:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:20.459 13:17:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:32:20.459 13:17:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 925682' 00:32:20.459 killing process with pid 925682 00:32:20.459 13:17:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 925682 00:32:20.459 13:17:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 925682 00:32:20.459 00:32:20.459 real 0m12.242s 00:32:20.459 user 0m49.756s 00:32:20.459 sys 0m1.826s 00:32:20.459 13:17:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:20.459 13:17:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:20.459 ************************************ 00:32:20.459 END TEST spdk_target_abort 00:32:20.459 ************************************ 00:32:20.459 13:17:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:20.459 13:17:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:20.459 13:17:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:20.459 13:17:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:20.459 13:17:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:20.459 ************************************ 00:32:20.459 START TEST kernel_target_abort 00:32:20.459 ************************************ 00:32:20.459 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:32:20.459 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:20.459 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:20.459 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.459 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:20.460 13:17:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:24.688 Waiting for block devices as requested 00:32:24.688 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:24.688 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:24.688 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:24.688 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:24.688 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:24.688 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:24.688 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:24.948 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:24.948 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:24.948 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:24.948 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:25.209 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:25.209 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:25.209 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:25.469 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:25.469 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:25.469 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:25.469 No valid GPT data, bailing 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:25.469 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:25.730 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:25.730 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:25.730 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:25.730 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:25.730 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:25.730 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:32:25.730 00:32:25.730 Discovery Log Number of Records 2, Generation counter 2 00:32:25.730 =====Discovery Log Entry 0====== 00:32:25.730 trtype: tcp 00:32:25.730 adrfam: ipv4 00:32:25.730 subtype: current discovery subsystem 00:32:25.730 treq: not specified, sq flow control disable supported 00:32:25.731 portid: 1 00:32:25.731 trsvcid: 4420 00:32:25.731 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:25.731 traddr: 10.0.0.1 00:32:25.731 eflags: none 00:32:25.731 sectype: none 00:32:25.731 =====Discovery Log Entry 1====== 00:32:25.731 trtype: tcp 00:32:25.731 adrfam: ipv4 00:32:25.731 subtype: nvme subsystem 00:32:25.731 treq: not specified, sq flow control disable supported 00:32:25.731 portid: 1 00:32:25.731 trsvcid: 4420 00:32:25.731 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:25.731 traddr: 10.0.0.1 00:32:25.731 eflags: none 00:32:25.731 sectype: none 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:25.731 
13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:25.731 13:17:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:25.731 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.032 Initializing NVMe Controllers 00:32:29.032 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:29.032 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:29.032 Initialization complete. Launching workers. 00:32:29.032 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 58845, failed: 0 00:32:29.032 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 58845, failed to submit 0 00:32:29.032 success 0, unsuccess 58845, failed 0 00:32:29.032 13:17:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:29.033 13:17:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:29.033 EAL: No free 2048 kB hugepages reported on node 1 00:32:32.329 Initializing NVMe Controllers 00:32:32.329 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:32.329 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:32.329 Initialization complete. Launching workers. 
00:32:32.329 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101516, failed: 0 00:32:32.329 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25586, failed to submit 75930 00:32:32.329 success 0, unsuccess 25586, failed 0 00:32:32.329 13:17:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:32.329 13:17:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:32.329 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.873 Initializing NVMe Controllers 00:32:34.873 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:34.873 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:34.873 Initialization complete. Launching workers. 00:32:34.873 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97509, failed: 0 00:32:34.873 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24390, failed to submit 73119 00:32:34.873 success 0, unsuccess 24390, failed 0 00:32:34.873 13:17:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:34.873 13:17:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:34.873 13:17:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:34.873 13:17:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:34.873 13:17:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:34.873 13:17:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:34.873 13:17:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:34.873 13:17:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:34.873 13:17:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:34.873 13:17:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:39.081 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:32:39.081 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:39.081 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:40.464 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:40.725 00:32:40.725 real 0m20.232s 00:32:40.725 user 0m9.363s 00:32:40.725 sys 0m6.352s 00:32:40.725 13:18:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:40.725 13:18:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:40.725 ************************************ 00:32:40.725 END TEST kernel_target_abort 00:32:40.725 ************************************ 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:40.725 rmmod nvme_tcp 00:32:40.725 rmmod nvme_fabrics 00:32:40.725 rmmod nvme_keyring 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 925682 ']' 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 925682 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 925682 ']' 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 925682 00:32:40.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (925682) - No such process 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 925682 is not found' 00:32:40.725 Process with pid 925682 is not found 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:40.725 13:18:02 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:44.927 Waiting for block devices as requested 00:32:44.927 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:44.927 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:44.927 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:44.927 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:44.927 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:44.927 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:45.188 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:45.188 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:45.188 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:45.449 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:45.449 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:45.449 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:45.709 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:45.709 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 
00:32:45.709 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:45.709 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:45.968 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:45.968 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:45.968 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:45.968 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:45.968 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:45.968 13:18:07 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.968 13:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:45.968 13:18:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.877 13:18:09 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:47.877 00:32:47.877 real 0m53.157s 00:32:47.877 user 1m4.900s 00:32:47.877 sys 0m19.689s 00:32:47.877 13:18:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:47.877 13:18:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:47.877 ************************************ 00:32:47.877 END TEST nvmf_abort_qd_sizes 00:32:47.877 ************************************ 00:32:48.138 13:18:09 -- common/autotest_common.sh@1142 -- # return 0 00:32:48.138 13:18:09 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:48.138 13:18:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:48.138 13:18:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:48.138 13:18:09 -- common/autotest_common.sh@10 -- # set +x 00:32:48.138 ************************************ 00:32:48.138 START TEST keyring_file 00:32:48.138 ************************************ 00:32:48.138 13:18:09 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:48.138 * Looking for test storage... 
00:32:48.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:48.138 13:18:09 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:48.138 13:18:09 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:48.138 13:18:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:48.138 13:18:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:48.138 13:18:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:48.138 13:18:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:48.138 13:18:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:48.138 13:18:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:48.138 13:18:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:48.138 13:18:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:48.138 13:18:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:48.138 13:18:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:48.138 13:18:09 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:48.139 13:18:09 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:48.139 13:18:09 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.139 13:18:09 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.139 13:18:09 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.139 13:18:09 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.139 13:18:09 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.139 13:18:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:48.139 13:18:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:48.139 13:18:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:48.139 13:18:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:48.139 13:18:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:48.139 13:18:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:48.139 13:18:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:48.139 13:18:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YNJmDbWVPo 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:48.139 13:18:09 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YNJmDbWVPo 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YNJmDbWVPo 00:32:48.139 13:18:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.YNJmDbWVPo 00:32:48.139 13:18:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Z1TxK9kpNI 00:32:48.139 13:18:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:48.139 13:18:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:48.400 13:18:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Z1TxK9kpNI 00:32:48.400 13:18:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Z1TxK9kpNI 00:32:48.400 13:18:10 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Z1TxK9kpNI 00:32:48.400 13:18:10 keyring_file -- keyring/file.sh@30 -- # tgtpid=936842 00:32:48.400 13:18:10 keyring_file -- keyring/file.sh@32 -- # waitforlisten 936842 00:32:48.400 13:18:10 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:48.400 13:18:10 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 936842 ']' 00:32:48.400 13:18:10 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.400 13:18:10 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:48.400 13:18:10 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.400 13:18:10 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:48.400 13:18:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:48.400 [2024-07-15 13:18:10.064440] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
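The prep_key calls traced above turn a raw hex key into an NVMe TLS interchange PSK file that bperf loads later. A rough sketch of the same steps, assuming the output of format_interchange_psk is redirected into the temp file (the redirect itself is not visible in the xtrace output) and with the helper taken from nvmf/common.sh as sourced above:

  key=00112233445566778899aabbccddeeff
  path=$(mktemp)                          # e.g. /tmp/tmp.YNJmDbWVPo in this run
  # format_interchange_psk prefixes the hex key with "NVMeTLSkey-1" and encodes it
  # (digest 0) via a small embedded python helper; see nvmf/common.sh
  format_interchange_psk "$key" 0 > "$path"   # assumption: result written to the temp path
  chmod 0600 "$path"                      # the keyring module rejects wider permissions (tested below)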
00:32:48.400 [2024-07-15 13:18:10.064531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid936842 ] 00:32:48.400 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.400 [2024-07-15 13:18:10.137194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.400 [2024-07-15 13:18:10.216273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:49.344 13:18:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:49.344 [2024-07-15 13:18:10.847700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.344 null0 00:32:49.344 [2024-07-15 13:18:10.879743] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:49.344 [2024-07-15 13:18:10.880012] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:49.344 [2024-07-15 13:18:10.887755] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.344 13:18:10 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.344 13:18:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:49.344 [2024-07-15 13:18:10.899787] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:49.344 request: 00:32:49.344 { 00:32:49.344 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:49.344 "secure_channel": false, 00:32:49.345 "listen_address": { 00:32:49.345 "trtype": "tcp", 00:32:49.345 "traddr": "127.0.0.1", 00:32:49.345 "trsvcid": "4420" 00:32:49.345 }, 00:32:49.345 "method": "nvmf_subsystem_add_listener", 00:32:49.345 "req_id": 1 00:32:49.345 } 00:32:49.345 Got JSON-RPC error response 00:32:49.345 response: 00:32:49.345 { 00:32:49.345 "code": -32602, 00:32:49.345 "message": "Invalid parameters" 00:32:49.345 } 00:32:49.345 13:18:10 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:49.345 13:18:10 keyring_file -- common/autotest_common.sh@651 -- # es=1 
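The NOT wrapper above asserts that adding a second listener for the same address fails. Stripped of the wrapper, the check amounts to the following, using the same arguments shown in the trace (rpc.py path shortened to the repository-relative form):

  # the target already listens on 127.0.0.1:4420, so this second add must fail
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 \
      && echo "unexpected success" \
      || echo "got the expected 'Listener already exists' (-32602) error"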
00:32:49.345 13:18:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:49.345 13:18:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:49.345 13:18:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:49.345 13:18:10 keyring_file -- keyring/file.sh@46 -- # bperfpid=937127 00:32:49.345 13:18:10 keyring_file -- keyring/file.sh@48 -- # waitforlisten 937127 /var/tmp/bperf.sock 00:32:49.345 13:18:10 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 937127 ']' 00:32:49.345 13:18:10 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:49.345 13:18:10 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:49.345 13:18:10 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:49.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:49.345 13:18:10 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:49.345 13:18:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:49.345 13:18:10 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:49.345 [2024-07-15 13:18:10.953344] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 00:32:49.345 [2024-07-15 13:18:10.953391] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937127 ] 00:32:49.345 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.345 [2024-07-15 13:18:11.033295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.345 [2024-07-15 13:18:11.097204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.915 13:18:11 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:49.915 13:18:11 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:49.915 13:18:11 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YNJmDbWVPo 00:32:49.915 13:18:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YNJmDbWVPo 00:32:50.175 13:18:11 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Z1TxK9kpNI 00:32:50.175 13:18:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Z1TxK9kpNI 00:32:50.434 13:18:12 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:50.434 13:18:12 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:50.434 13:18:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:50.434 13:18:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:50.434 13:18:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:50.434 13:18:12 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.YNJmDbWVPo == \/\t\m\p\/\t\m\p\.\Y\N\J\m\D\b\W\V\P\o ]] 00:32:50.434 13:18:12 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:32:50.434 13:18:12 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:50.434 13:18:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:50.434 13:18:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:50.434 13:18:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:50.694 13:18:12 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Z1TxK9kpNI == \/\t\m\p\/\t\m\p\.\Z\1\T\x\K\9\k\p\N\I ]] 00:32:50.694 13:18:12 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:50.694 13:18:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:50.694 13:18:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:50.694 13:18:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:50.694 13:18:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:50.694 13:18:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:50.694 13:18:12 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:50.694 13:18:12 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:50.694 13:18:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:50.694 13:18:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:50.694 13:18:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:50.694 13:18:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:50.694 13:18:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:50.955 13:18:12 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:50.955 13:18:12 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:50.955 13:18:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:50.955 [2024-07-15 13:18:12.773652] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:51.216 nvme0n1 00:32:51.216 13:18:12 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:51.216 13:18:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:51.216 13:18:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:51.216 13:18:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:51.216 13:18:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:51.216 13:18:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.216 13:18:13 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:51.216 13:18:13 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:51.216 13:18:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:51.216 13:18:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:51.216 13:18:13 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:32:51.216 13:18:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.216 13:18:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:51.477 13:18:13 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:51.477 13:18:13 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:51.477 Running I/O for 1 seconds... 00:32:52.861 00:32:52.861 Latency(us) 00:32:52.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.861 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:52.861 nvme0n1 : 1.01 11670.19 45.59 0.00 0.00 10930.07 6116.69 22500.69 00:32:52.861 =================================================================================================================== 00:32:52.861 Total : 11670.19 45.59 0.00 0.00 10930.07 6116.69 22500.69 00:32:52.861 0 00:32:52.861 13:18:14 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:52.861 13:18:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:52.861 13:18:14 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:52.861 13:18:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:52.861 13:18:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:52.861 13:18:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:52.861 13:18:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:52.861 13:18:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:52.861 13:18:14 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:52.861 13:18:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:52.861 13:18:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:52.861 13:18:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:52.861 13:18:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:52.861 13:18:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:52.861 13:18:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.122 13:18:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:53.122 13:18:14 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:53.122 13:18:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:53.122 13:18:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:53.122 13:18:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:53.122 13:18:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:53.122 13:18:14 keyring_file -- common/autotest_common.sh@640 -- # type 
-t bperf_cmd 00:32:53.122 13:18:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:53.122 13:18:14 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:53.122 13:18:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:53.122 [2024-07-15 13:18:14.945159] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:53.122 [2024-07-15 13:18:14.945382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f5450 (107): Transport endpoint is not connected 00:32:53.122 [2024-07-15 13:18:14.946378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f5450 (9): Bad file descriptor 00:32:53.383 [2024-07-15 13:18:14.947380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:53.383 [2024-07-15 13:18:14.947396] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:53.383 [2024-07-15 13:18:14.947402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:53.383 request: 00:32:53.383 { 00:32:53.383 "name": "nvme0", 00:32:53.383 "trtype": "tcp", 00:32:53.383 "traddr": "127.0.0.1", 00:32:53.383 "adrfam": "ipv4", 00:32:53.383 "trsvcid": "4420", 00:32:53.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:53.383 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:53.383 "prchk_reftag": false, 00:32:53.383 "prchk_guard": false, 00:32:53.383 "hdgst": false, 00:32:53.383 "ddgst": false, 00:32:53.383 "psk": "key1", 00:32:53.383 "method": "bdev_nvme_attach_controller", 00:32:53.383 "req_id": 1 00:32:53.383 } 00:32:53.383 Got JSON-RPC error response 00:32:53.383 response: 00:32:53.383 { 00:32:53.383 "code": -5, 00:32:53.383 "message": "Input/output error" 00:32:53.383 } 00:32:53.383 13:18:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:53.383 13:18:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:53.383 13:18:14 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:53.383 13:18:14 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:53.383 13:18:14 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:53.383 13:18:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:53.383 13:18:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:53.383 13:18:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:53.383 13:18:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.383 13:18:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:53.383 13:18:15 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:53.383 13:18:15 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:53.383 13:18:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:53.383 13:18:15 keyring_file -- keyring/common.sh@12 -- # jq 
-r .refcnt 00:32:53.383 13:18:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:53.383 13:18:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:53.383 13:18:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.645 13:18:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:53.645 13:18:15 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:53.645 13:18:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:53.645 13:18:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:53.645 13:18:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:53.905 13:18:15 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:53.905 13:18:15 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:53.905 13:18:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:54.166 13:18:15 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:54.166 13:18:15 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.YNJmDbWVPo 00:32:54.166 13:18:15 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.YNJmDbWVPo 00:32:54.166 13:18:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:54.166 13:18:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.YNJmDbWVPo 00:32:54.166 13:18:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:54.166 13:18:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:54.166 13:18:15 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:54.166 13:18:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:54.166 13:18:15 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YNJmDbWVPo 00:32:54.166 13:18:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YNJmDbWVPo 00:32:54.166 [2024-07-15 13:18:15.889195] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.YNJmDbWVPo': 0100660 00:32:54.166 [2024-07-15 13:18:15.889215] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:54.166 request: 00:32:54.166 { 00:32:54.166 "name": "key0", 00:32:54.166 "path": "/tmp/tmp.YNJmDbWVPo", 00:32:54.166 "method": "keyring_file_add_key", 00:32:54.166 "req_id": 1 00:32:54.166 } 00:32:54.166 Got JSON-RPC error response 00:32:54.166 response: 00:32:54.166 { 00:32:54.166 "code": -1, 00:32:54.166 "message": "Operation not permitted" 00:32:54.166 } 00:32:54.166 13:18:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:54.166 13:18:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:54.166 13:18:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:54.166 13:18:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
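The failure just traced is the keyring module refusing a key file whose mode allows group or other access. Reduced to plain commands against the bperf RPC socket (rpc.py path shortened), the check looks like:

  chmod 0660 /tmp/tmp.YNJmDbWVPo
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YNJmDbWVPo \
      || echo "rejected as expected: Invalid permissions for key file (0100660)"
  # the trace below restores mode 0600, re-adds the key, then deletes the file to
  # exercise the "No such device" path in bdev_nvme_attach_controller --psk key0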
00:32:54.166 13:18:15 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.YNJmDbWVPo 00:32:54.166 13:18:15 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YNJmDbWVPo 00:32:54.166 13:18:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YNJmDbWVPo 00:32:54.426 13:18:16 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.YNJmDbWVPo 00:32:54.426 13:18:16 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:54.426 13:18:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:54.426 13:18:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:54.426 13:18:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:54.426 13:18:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:54.426 13:18:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:54.426 13:18:16 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:54.426 13:18:16 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:54.426 13:18:16 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:54.426 13:18:16 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:54.426 13:18:16 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:54.426 13:18:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:54.426 13:18:16 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:54.426 13:18:16 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:54.426 13:18:16 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:54.426 13:18:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:54.687 [2024-07-15 13:18:16.362402] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.YNJmDbWVPo': No such file or directory 00:32:54.687 [2024-07-15 13:18:16.362416] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:54.687 [2024-07-15 13:18:16.362434] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:54.687 [2024-07-15 13:18:16.362439] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:54.687 [2024-07-15 13:18:16.362444] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:54.687 request: 00:32:54.687 { 00:32:54.687 "name": "nvme0", 00:32:54.687 "trtype": "tcp", 00:32:54.687 "traddr": "127.0.0.1", 00:32:54.687 "adrfam": "ipv4", 00:32:54.687 "trsvcid": "4420", 00:32:54.687 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:32:54.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:54.687 "prchk_reftag": false, 00:32:54.687 "prchk_guard": false, 00:32:54.687 "hdgst": false, 00:32:54.687 "ddgst": false, 00:32:54.687 "psk": "key0", 00:32:54.687 "method": "bdev_nvme_attach_controller", 00:32:54.687 "req_id": 1 00:32:54.687 } 00:32:54.687 Got JSON-RPC error response 00:32:54.687 response: 00:32:54.687 { 00:32:54.687 "code": -19, 00:32:54.687 "message": "No such device" 00:32:54.687 } 00:32:54.687 13:18:16 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:54.687 13:18:16 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:54.687 13:18:16 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:54.687 13:18:16 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:54.687 13:18:16 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:54.687 13:18:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:54.948 13:18:16 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:54.948 13:18:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:54.948 13:18:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:54.948 13:18:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:54.948 13:18:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:54.948 13:18:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:54.948 13:18:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yp3zp6XlPL 00:32:54.948 13:18:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:54.948 13:18:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:54.948 13:18:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:54.948 13:18:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:54.948 13:18:16 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:54.948 13:18:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:54.948 13:18:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:54.948 13:18:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yp3zp6XlPL 00:32:54.948 13:18:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yp3zp6XlPL 00:32:54.948 13:18:16 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.yp3zp6XlPL 00:32:54.948 13:18:16 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yp3zp6XlPL 00:32:54.948 13:18:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yp3zp6XlPL 00:32:54.948 13:18:16 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:54.948 13:18:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:55.208 nvme0n1 00:32:55.208 13:18:16 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:32:55.208 13:18:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:55.208 13:18:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:55.208 13:18:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:55.208 13:18:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.208 13:18:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.469 13:18:17 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:55.469 13:18:17 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:55.469 13:18:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:55.469 13:18:17 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:55.469 13:18:17 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:55.469 13:18:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.469 13:18:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:55.469 13:18:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.729 13:18:17 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:55.729 13:18:17 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:55.729 13:18:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:55.729 13:18:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:55.729 13:18:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.729 13:18:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.729 13:18:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:55.990 13:18:17 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:55.990 13:18:17 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:55.990 13:18:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:55.990 13:18:17 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:55.990 13:18:17 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:55.990 13:18:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:56.250 13:18:17 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:56.250 13:18:17 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yp3zp6XlPL 00:32:56.250 13:18:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yp3zp6XlPL 00:32:56.250 13:18:18 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Z1TxK9kpNI 00:32:56.250 13:18:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Z1TxK9kpNI 00:32:56.510 13:18:18 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:56.510 13:18:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:56.769 nvme0n1 00:32:56.769 13:18:18 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:56.769 13:18:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:57.100 13:18:18 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:57.100 "subsystems": [ 00:32:57.100 { 00:32:57.100 "subsystem": "keyring", 00:32:57.100 "config": [ 00:32:57.100 { 00:32:57.100 "method": "keyring_file_add_key", 00:32:57.100 "params": { 00:32:57.100 "name": "key0", 00:32:57.100 "path": "/tmp/tmp.yp3zp6XlPL" 00:32:57.100 } 00:32:57.100 }, 00:32:57.100 { 00:32:57.100 "method": "keyring_file_add_key", 00:32:57.100 "params": { 00:32:57.100 "name": "key1", 00:32:57.100 "path": "/tmp/tmp.Z1TxK9kpNI" 00:32:57.100 } 00:32:57.100 } 00:32:57.100 ] 00:32:57.100 }, 00:32:57.100 { 00:32:57.100 "subsystem": "iobuf", 00:32:57.100 "config": [ 00:32:57.100 { 00:32:57.100 "method": "iobuf_set_options", 00:32:57.100 "params": { 00:32:57.100 "small_pool_count": 8192, 00:32:57.100 "large_pool_count": 1024, 00:32:57.100 "small_bufsize": 8192, 00:32:57.100 "large_bufsize": 135168 00:32:57.100 } 00:32:57.100 } 00:32:57.100 ] 00:32:57.100 }, 00:32:57.100 { 00:32:57.100 "subsystem": "sock", 00:32:57.100 "config": [ 00:32:57.100 { 00:32:57.100 "method": "sock_set_default_impl", 00:32:57.100 "params": { 00:32:57.100 "impl_name": "posix" 00:32:57.100 } 00:32:57.100 }, 00:32:57.100 { 00:32:57.100 "method": "sock_impl_set_options", 00:32:57.100 "params": { 00:32:57.100 "impl_name": "ssl", 00:32:57.100 "recv_buf_size": 4096, 00:32:57.100 "send_buf_size": 4096, 00:32:57.100 "enable_recv_pipe": true, 00:32:57.100 "enable_quickack": false, 00:32:57.100 "enable_placement_id": 0, 00:32:57.100 "enable_zerocopy_send_server": true, 00:32:57.100 "enable_zerocopy_send_client": false, 00:32:57.100 "zerocopy_threshold": 0, 00:32:57.100 "tls_version": 0, 00:32:57.100 "enable_ktls": false 00:32:57.100 } 00:32:57.100 }, 00:32:57.100 { 00:32:57.100 "method": "sock_impl_set_options", 00:32:57.100 "params": { 00:32:57.100 "impl_name": "posix", 00:32:57.100 "recv_buf_size": 2097152, 00:32:57.100 "send_buf_size": 2097152, 00:32:57.100 "enable_recv_pipe": true, 00:32:57.100 "enable_quickack": false, 00:32:57.100 "enable_placement_id": 0, 00:32:57.100 "enable_zerocopy_send_server": true, 00:32:57.100 "enable_zerocopy_send_client": false, 00:32:57.100 "zerocopy_threshold": 0, 00:32:57.100 "tls_version": 0, 00:32:57.100 "enable_ktls": false 00:32:57.100 } 00:32:57.100 } 00:32:57.100 ] 00:32:57.100 }, 00:32:57.100 { 00:32:57.100 "subsystem": "vmd", 00:32:57.100 "config": [] 00:32:57.100 }, 00:32:57.100 { 00:32:57.100 "subsystem": "accel", 00:32:57.100 "config": [ 00:32:57.100 { 00:32:57.100 "method": "accel_set_options", 00:32:57.100 "params": { 00:32:57.100 "small_cache_size": 128, 00:32:57.100 "large_cache_size": 16, 00:32:57.100 "task_count": 2048, 00:32:57.100 "sequence_count": 2048, 00:32:57.100 "buf_count": 2048 00:32:57.100 } 00:32:57.100 } 00:32:57.100 ] 00:32:57.100 }, 00:32:57.100 { 00:32:57.100 
"subsystem": "bdev", 00:32:57.100 "config": [ 00:32:57.100 { 00:32:57.100 "method": "bdev_set_options", 00:32:57.100 "params": { 00:32:57.100 "bdev_io_pool_size": 65535, 00:32:57.100 "bdev_io_cache_size": 256, 00:32:57.100 "bdev_auto_examine": true, 00:32:57.100 "iobuf_small_cache_size": 128, 00:32:57.100 "iobuf_large_cache_size": 16 00:32:57.100 } 00:32:57.100 }, 00:32:57.100 { 00:32:57.100 "method": "bdev_raid_set_options", 00:32:57.100 "params": { 00:32:57.100 "process_window_size_kb": 1024 00:32:57.100 } 00:32:57.100 }, 00:32:57.100 { 00:32:57.100 "method": "bdev_iscsi_set_options", 00:32:57.100 "params": { 00:32:57.100 "timeout_sec": 30 00:32:57.100 } 00:32:57.100 }, 00:32:57.100 { 00:32:57.100 "method": "bdev_nvme_set_options", 00:32:57.100 "params": { 00:32:57.100 "action_on_timeout": "none", 00:32:57.100 "timeout_us": 0, 00:32:57.100 "timeout_admin_us": 0, 00:32:57.101 "keep_alive_timeout_ms": 10000, 00:32:57.101 "arbitration_burst": 0, 00:32:57.101 "low_priority_weight": 0, 00:32:57.101 "medium_priority_weight": 0, 00:32:57.101 "high_priority_weight": 0, 00:32:57.101 "nvme_adminq_poll_period_us": 10000, 00:32:57.101 "nvme_ioq_poll_period_us": 0, 00:32:57.101 "io_queue_requests": 512, 00:32:57.101 "delay_cmd_submit": true, 00:32:57.101 "transport_retry_count": 4, 00:32:57.101 "bdev_retry_count": 3, 00:32:57.101 "transport_ack_timeout": 0, 00:32:57.101 "ctrlr_loss_timeout_sec": 0, 00:32:57.101 "reconnect_delay_sec": 0, 00:32:57.101 "fast_io_fail_timeout_sec": 0, 00:32:57.101 "disable_auto_failback": false, 00:32:57.101 "generate_uuids": false, 00:32:57.101 "transport_tos": 0, 00:32:57.101 "nvme_error_stat": false, 00:32:57.101 "rdma_srq_size": 0, 00:32:57.101 "io_path_stat": false, 00:32:57.101 "allow_accel_sequence": false, 00:32:57.101 "rdma_max_cq_size": 0, 00:32:57.101 "rdma_cm_event_timeout_ms": 0, 00:32:57.101 "dhchap_digests": [ 00:32:57.101 "sha256", 00:32:57.101 "sha384", 00:32:57.101 "sha512" 00:32:57.101 ], 00:32:57.101 "dhchap_dhgroups": [ 00:32:57.101 "null", 00:32:57.101 "ffdhe2048", 00:32:57.101 "ffdhe3072", 00:32:57.101 "ffdhe4096", 00:32:57.101 "ffdhe6144", 00:32:57.101 "ffdhe8192" 00:32:57.101 ] 00:32:57.101 } 00:32:57.101 }, 00:32:57.101 { 00:32:57.101 "method": "bdev_nvme_attach_controller", 00:32:57.101 "params": { 00:32:57.101 "name": "nvme0", 00:32:57.101 "trtype": "TCP", 00:32:57.101 "adrfam": "IPv4", 00:32:57.101 "traddr": "127.0.0.1", 00:32:57.101 "trsvcid": "4420", 00:32:57.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:57.101 "prchk_reftag": false, 00:32:57.101 "prchk_guard": false, 00:32:57.101 "ctrlr_loss_timeout_sec": 0, 00:32:57.101 "reconnect_delay_sec": 0, 00:32:57.101 "fast_io_fail_timeout_sec": 0, 00:32:57.101 "psk": "key0", 00:32:57.101 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:57.101 "hdgst": false, 00:32:57.101 "ddgst": false 00:32:57.101 } 00:32:57.101 }, 00:32:57.101 { 00:32:57.101 "method": "bdev_nvme_set_hotplug", 00:32:57.101 "params": { 00:32:57.101 "period_us": 100000, 00:32:57.101 "enable": false 00:32:57.101 } 00:32:57.101 }, 00:32:57.101 { 00:32:57.101 "method": "bdev_wait_for_examine" 00:32:57.101 } 00:32:57.101 ] 00:32:57.101 }, 00:32:57.101 { 00:32:57.101 "subsystem": "nbd", 00:32:57.101 "config": [] 00:32:57.101 } 00:32:57.101 ] 00:32:57.101 }' 00:32:57.101 13:18:18 keyring_file -- keyring/file.sh@114 -- # killprocess 937127 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 937127 ']' 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@952 -- # kill -0 937127 00:32:57.101 13:18:18 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 937127 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 937127' 00:32:57.101 killing process with pid 937127 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@967 -- # kill 937127 00:32:57.101 Received shutdown signal, test time was about 1.000000 seconds 00:32:57.101 00:32:57.101 Latency(us) 00:32:57.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:57.101 =================================================================================================================== 00:32:57.101 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@972 -- # wait 937127 00:32:57.101 13:18:18 keyring_file -- keyring/file.sh@117 -- # bperfpid=938638 00:32:57.101 13:18:18 keyring_file -- keyring/file.sh@119 -- # waitforlisten 938638 /var/tmp/bperf.sock 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 938638 ']' 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:57.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
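The restart sequence here relies on the configuration snapshot taken above with save_config: the old bdevperf is killed and a fresh instance (pid 938638) is launched below with that JSON fed back in as /dev/fd/63. A condensed sketch of the pattern, with the long workspace paths shortened:

  config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)   # snapshot of the keyring + bdev_nvme setup
  # ...kill the old bdevperf, then start a new one from the snapshot...
  build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")            # the <(...) appears in the trace as /dev/fd/63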
00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:57.101 13:18:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:57.101 13:18:18 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:57.101 13:18:18 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:57.101 "subsystems": [ 00:32:57.101 { 00:32:57.101 "subsystem": "keyring", 00:32:57.101 "config": [ 00:32:57.101 { 00:32:57.101 "method": "keyring_file_add_key", 00:32:57.101 "params": { 00:32:57.101 "name": "key0", 00:32:57.101 "path": "/tmp/tmp.yp3zp6XlPL" 00:32:57.101 } 00:32:57.101 }, 00:32:57.101 { 00:32:57.101 "method": "keyring_file_add_key", 00:32:57.101 "params": { 00:32:57.101 "name": "key1", 00:32:57.101 "path": "/tmp/tmp.Z1TxK9kpNI" 00:32:57.101 } 00:32:57.101 } 00:32:57.101 ] 00:32:57.101 }, 00:32:57.101 { 00:32:57.101 "subsystem": "iobuf", 00:32:57.101 "config": [ 00:32:57.101 { 00:32:57.101 "method": "iobuf_set_options", 00:32:57.101 "params": { 00:32:57.101 "small_pool_count": 8192, 00:32:57.101 "large_pool_count": 1024, 00:32:57.101 "small_bufsize": 8192, 00:32:57.101 "large_bufsize": 135168 00:32:57.101 } 00:32:57.101 } 00:32:57.101 ] 00:32:57.101 }, 00:32:57.101 { 00:32:57.101 "subsystem": "sock", 00:32:57.101 "config": [ 00:32:57.101 { 00:32:57.101 "method": "sock_set_default_impl", 00:32:57.101 "params": { 00:32:57.101 "impl_name": "posix" 00:32:57.101 } 00:32:57.101 }, 00:32:57.101 { 00:32:57.101 "method": "sock_impl_set_options", 00:32:57.101 "params": { 00:32:57.101 "impl_name": "ssl", 00:32:57.101 "recv_buf_size": 4096, 00:32:57.101 "send_buf_size": 4096, 00:32:57.101 "enable_recv_pipe": true, 00:32:57.101 "enable_quickack": false, 00:32:57.101 "enable_placement_id": 0, 00:32:57.101 "enable_zerocopy_send_server": true, 00:32:57.101 "enable_zerocopy_send_client": false, 00:32:57.101 "zerocopy_threshold": 0, 00:32:57.101 "tls_version": 0, 00:32:57.101 "enable_ktls": false 00:32:57.101 } 00:32:57.101 }, 00:32:57.101 { 00:32:57.101 "method": "sock_impl_set_options", 00:32:57.101 "params": { 00:32:57.101 "impl_name": "posix", 00:32:57.101 "recv_buf_size": 2097152, 00:32:57.101 "send_buf_size": 2097152, 00:32:57.101 "enable_recv_pipe": true, 00:32:57.101 "enable_quickack": false, 00:32:57.101 "enable_placement_id": 0, 00:32:57.101 "enable_zerocopy_send_server": true, 00:32:57.101 "enable_zerocopy_send_client": false, 00:32:57.101 "zerocopy_threshold": 0, 00:32:57.101 "tls_version": 0, 00:32:57.101 "enable_ktls": false 00:32:57.101 } 00:32:57.101 } 00:32:57.101 ] 00:32:57.101 }, 00:32:57.101 { 00:32:57.101 "subsystem": "vmd", 00:32:57.101 "config": [] 00:32:57.101 }, 00:32:57.101 { 00:32:57.101 "subsystem": "accel", 00:32:57.101 "config": [ 00:32:57.101 { 00:32:57.101 "method": "accel_set_options", 00:32:57.101 "params": { 00:32:57.101 "small_cache_size": 128, 00:32:57.101 "large_cache_size": 16, 00:32:57.101 "task_count": 2048, 00:32:57.101 "sequence_count": 2048, 00:32:57.102 "buf_count": 2048 00:32:57.102 } 00:32:57.102 } 00:32:57.102 ] 00:32:57.102 }, 00:32:57.102 { 00:32:57.102 "subsystem": "bdev", 00:32:57.102 "config": [ 00:32:57.102 { 00:32:57.102 "method": "bdev_set_options", 00:32:57.102 "params": { 00:32:57.102 "bdev_io_pool_size": 65535, 00:32:57.102 "bdev_io_cache_size": 256, 00:32:57.102 "bdev_auto_examine": true, 00:32:57.102 "iobuf_small_cache_size": 128, 00:32:57.102 "iobuf_large_cache_size": 16 
00:32:57.102 } 00:32:57.102 }, 00:32:57.102 { 00:32:57.102 "method": "bdev_raid_set_options", 00:32:57.102 "params": { 00:32:57.102 "process_window_size_kb": 1024 00:32:57.102 } 00:32:57.102 }, 00:32:57.102 { 00:32:57.102 "method": "bdev_iscsi_set_options", 00:32:57.102 "params": { 00:32:57.102 "timeout_sec": 30 00:32:57.102 } 00:32:57.102 }, 00:32:57.102 { 00:32:57.102 "method": "bdev_nvme_set_options", 00:32:57.102 "params": { 00:32:57.102 "action_on_timeout": "none", 00:32:57.102 "timeout_us": 0, 00:32:57.102 "timeout_admin_us": 0, 00:32:57.102 "keep_alive_timeout_ms": 10000, 00:32:57.102 "arbitration_burst": 0, 00:32:57.102 "low_priority_weight": 0, 00:32:57.102 "medium_priority_weight": 0, 00:32:57.102 "high_priority_weight": 0, 00:32:57.102 "nvme_adminq_poll_period_us": 10000, 00:32:57.102 "nvme_ioq_poll_period_us": 0, 00:32:57.102 "io_queue_requests": 512, 00:32:57.102 "delay_cmd_submit": true, 00:32:57.102 "transport_retry_count": 4, 00:32:57.102 "bdev_retry_count": 3, 00:32:57.102 "transport_ack_timeout": 0, 00:32:57.102 "ctrlr_loss_timeout_sec": 0, 00:32:57.102 "reconnect_delay_sec": 0, 00:32:57.102 "fast_io_fail_timeout_sec": 0, 00:32:57.102 "disable_auto_failback": false, 00:32:57.102 "generate_uuids": false, 00:32:57.102 "transport_tos": 0, 00:32:57.102 "nvme_error_stat": false, 00:32:57.102 "rdma_srq_size": 0, 00:32:57.102 "io_path_stat": false, 00:32:57.102 "allow_accel_sequence": false, 00:32:57.102 "rdma_max_cq_size": 0, 00:32:57.102 "rdma_cm_event_timeout_ms": 0, 00:32:57.102 "dhchap_digests": [ 00:32:57.102 "sha256", 00:32:57.102 "sha384", 00:32:57.102 "sha512" 00:32:57.102 ], 00:32:57.102 "dhchap_dhgroups": [ 00:32:57.102 "null", 00:32:57.102 "ffdhe2048", 00:32:57.102 "ffdhe3072", 00:32:57.102 "ffdhe4096", 00:32:57.102 "ffdhe6144", 00:32:57.102 "ffdhe8192" 00:32:57.102 ] 00:32:57.102 } 00:32:57.102 }, 00:32:57.102 { 00:32:57.102 "method": "bdev_nvme_attach_controller", 00:32:57.102 "params": { 00:32:57.102 "name": "nvme0", 00:32:57.102 "trtype": "TCP", 00:32:57.102 "adrfam": "IPv4", 00:32:57.102 "traddr": "127.0.0.1", 00:32:57.102 "trsvcid": "4420", 00:32:57.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:57.102 "prchk_reftag": false, 00:32:57.102 "prchk_guard": false, 00:32:57.102 "ctrlr_loss_timeout_sec": 0, 00:32:57.102 "reconnect_delay_sec": 0, 00:32:57.102 "fast_io_fail_timeout_sec": 0, 00:32:57.102 "psk": "key0", 00:32:57.102 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:57.102 "hdgst": false, 00:32:57.102 "ddgst": false 00:32:57.102 } 00:32:57.102 }, 00:32:57.102 { 00:32:57.102 "method": "bdev_nvme_set_hotplug", 00:32:57.102 "params": { 00:32:57.102 "period_us": 100000, 00:32:57.102 "enable": false 00:32:57.102 } 00:32:57.102 }, 00:32:57.102 { 00:32:57.102 "method": "bdev_wait_for_examine" 00:32:57.102 } 00:32:57.102 ] 00:32:57.102 }, 00:32:57.102 { 00:32:57.102 "subsystem": "nbd", 00:32:57.102 "config": [] 00:32:57.102 } 00:32:57.102 ] 00:32:57.102 }' 00:32:57.102 [2024-07-15 13:18:18.877902] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
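Once the restored bdevperf is up, the checks that follow confirm the keyring state survived the config round-trip by querying it over the bperf socket and filtering with jq. Roughly, with the rpc.py path shortened and expected values taken from this run:

  scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq length        # expect 2: key0 and key1 restored
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq '.[] | select(.name == "key0") | .refcnt'                        # expect 2: keyring ref + the nvme0 attach
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0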
00:32:57.102 [2024-07-15 13:18:18.877959] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid938638 ] 00:32:57.362 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.362 [2024-07-15 13:18:18.956568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.362 [2024-07-15 13:18:19.010237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.362 [2024-07-15 13:18:19.152458] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:57.932 13:18:19 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:57.932 13:18:19 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:57.932 13:18:19 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:57.932 13:18:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.932 13:18:19 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:58.193 13:18:19 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:58.193 13:18:19 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:58.193 13:18:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:58.193 13:18:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:58.193 13:18:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:58.193 13:18:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:58.193 13:18:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.193 13:18:19 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:58.193 13:18:19 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:58.193 13:18:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:58.193 13:18:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:58.193 13:18:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:58.193 13:18:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.193 13:18:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:58.453 13:18:20 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:58.453 13:18:20 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:58.453 13:18:20 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:58.453 13:18:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:58.713 13:18:20 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:58.713 13:18:20 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:58.713 13:18:20 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.yp3zp6XlPL /tmp/tmp.Z1TxK9kpNI 00:32:58.713 13:18:20 keyring_file -- keyring/file.sh@20 -- # killprocess 938638 00:32:58.713 13:18:20 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 938638 ']' 00:32:58.713 13:18:20 keyring_file -- common/autotest_common.sh@952 -- # kill -0 938638 00:32:58.713 13:18:20 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:32:58.713 13:18:20 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:58.713 13:18:20 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 938638 00:32:58.713 13:18:20 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:58.713 13:18:20 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:58.713 13:18:20 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 938638' 00:32:58.713 killing process with pid 938638 00:32:58.713 13:18:20 keyring_file -- common/autotest_common.sh@967 -- # kill 938638 00:32:58.713 Received shutdown signal, test time was about 1.000000 seconds 00:32:58.713 00:32:58.713 Latency(us) 00:32:58.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.713 =================================================================================================================== 00:32:58.713 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:58.714 13:18:20 keyring_file -- common/autotest_common.sh@972 -- # wait 938638 00:32:58.714 13:18:20 keyring_file -- keyring/file.sh@21 -- # killprocess 936842 00:32:58.714 13:18:20 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 936842 ']' 00:32:58.714 13:18:20 keyring_file -- common/autotest_common.sh@952 -- # kill -0 936842 00:32:58.714 13:18:20 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:58.714 13:18:20 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:58.714 13:18:20 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 936842 00:32:58.714 13:18:20 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:58.714 13:18:20 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:58.714 13:18:20 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 936842' 00:32:58.714 killing process with pid 936842 00:32:58.714 13:18:20 keyring_file -- common/autotest_common.sh@967 -- # kill 936842 00:32:58.714 [2024-07-15 13:18:20.517394] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:58.714 13:18:20 keyring_file -- common/autotest_common.sh@972 -- # wait 936842 00:32:58.974 00:32:58.974 real 0m10.988s 00:32:58.974 user 0m25.787s 00:32:58.974 sys 0m2.704s 00:32:58.974 13:18:20 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:58.974 13:18:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:58.974 ************************************ 00:32:58.974 END TEST keyring_file 00:32:58.974 ************************************ 00:32:58.974 13:18:20 -- common/autotest_common.sh@1142 -- # return 0 00:32:58.974 13:18:20 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:58.974 13:18:20 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:58.974 13:18:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:58.974 13:18:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:58.974 13:18:20 -- common/autotest_common.sh@10 -- # set +x 00:32:59.235 ************************************ 00:32:59.235 START TEST keyring_linux 00:32:59.235 ************************************ 00:32:59.236 13:18:20 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:59.236 * Looking for test storage... 00:32:59.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:59.236 13:18:20 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.236 13:18:20 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.236 13:18:20 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.236 13:18:20 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.236 13:18:20 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.236 13:18:20 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.236 13:18:20 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.236 13:18:20 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:59.236 13:18:20 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:59.236 13:18:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:59.236 13:18:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:59.236 13:18:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:59.236 13:18:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:59.236 13:18:20 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:59.236 13:18:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:59.236 13:18:20 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:59.236 /tmp/:spdk-test:key0 00:32:59.236 13:18:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:59.236 13:18:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:59.236 13:18:20 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:59.236 13:18:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:59.236 13:18:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:59.236 /tmp/:spdk-test:key1 00:32:59.236 13:18:21 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:59.236 13:18:21 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=939225 00:32:59.236 13:18:21 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 939225 00:32:59.236 13:18:21 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 939225 ']' 00:32:59.236 13:18:21 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.236 13:18:21 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:59.236 13:18:21 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.236 13:18:21 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:59.236 13:18:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:59.498 [2024-07-15 13:18:21.072886] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
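keyring/linux.sh@51 launches a plain spdk_tgt and waitforlisten then polls the default RPC socket until the target answers. A rough, simplified stand-in for that wait loop (binary paths relative to an SPDK checkout; the 0.5 s poll interval is illustrative, not taken from this run):
# Simplified stand-in for waitforlisten: poll the RPC socket until spdk_tgt responds.
./build/bin/spdk_tgt &
tgtpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
echo "spdk_tgt ($tgtpid) is listening on /var/tmp/spdk.sock"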
00:32:59.498 [2024-07-15 13:18:21.072956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid939225 ] 00:32:59.498 EAL: No free 2048 kB hugepages reported on node 1 00:32:59.498 [2024-07-15 13:18:21.147835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.498 [2024-07-15 13:18:21.224221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.070 13:18:21 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:00.070 13:18:21 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:00.071 13:18:21 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:00.071 13:18:21 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.071 13:18:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:00.071 [2024-07-15 13:18:21.875008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.071 null0 00:33:00.331 [2024-07-15 13:18:21.907040] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:00.331 [2024-07-15 13:18:21.907431] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:00.331 13:18:21 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.331 13:18:21 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:00.331 998674864 00:33:00.331 13:18:21 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:00.331 976862270 00:33:00.331 13:18:21 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=939383 00:33:00.331 13:18:21 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 939383 /var/tmp/bperf.sock 00:33:00.331 13:18:21 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 939383 ']' 00:33:00.331 13:18:21 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:00.331 13:18:21 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:00.331 13:18:21 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:00.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:00.332 13:18:21 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:00.332 13:18:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:00.332 13:18:21 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:00.332 [2024-07-15 13:18:21.978122] Starting SPDK v24.09-pre git sha1 c6070605c / DPDK 24.03.0 initialization... 
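The two keyctl add calls above load the NVMe TLS PSKs, already in interchange format (NVMeTLSkey-1:00:<base64>:), into the kernel session keyring and print their serial numbers (998674864 and 976862270); bperf later refers to them by name through --psk. A condensed version of that flow, reusing the throwaway test key from this log and assuming rpc.py is invoked from the SPDK checkout:
# Load the test PSK into the session keyring, then hand its name to the attach RPC.
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # returns the key serial number
keyctl print "$sn"                                # sanity-check the stored payload
./scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
  -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0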
00:33:00.332 [2024-07-15 13:18:21.978172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid939383 ] 00:33:00.332 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.332 [2024-07-15 13:18:22.058294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.332 [2024-07-15 13:18:22.111618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.276 13:18:22 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:01.276 13:18:22 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:01.276 13:18:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:01.276 13:18:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:01.276 13:18:22 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:01.276 13:18:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:01.276 13:18:23 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:01.276 13:18:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:01.542 [2024-07-15 13:18:23.234972] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:01.542 nvme0n1 00:33:01.542 13:18:23 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:01.542 13:18:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:01.542 13:18:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:01.542 13:18:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:01.542 13:18:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:01.542 13:18:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.805 13:18:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:01.806 13:18:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:01.806 13:18:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:01.806 13:18:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:01.806 13:18:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:01.806 13:18:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.806 13:18:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:02.066 13:18:23 keyring_linux -- keyring/linux.sh@25 -- # sn=998674864 00:33:02.066 13:18:23 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:02.066 13:18:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:02.066 13:18:23 keyring_linux -- keyring/linux.sh@26 -- # [[ 998674864 == \9\9\8\6\7\4\8\6\4 ]] 00:33:02.066 13:18:23 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 998674864 00:33:02.066 13:18:23 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:02.066 13:18:23 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:02.066 Running I/O for 1 seconds... 00:33:03.008 00:33:03.008 Latency(us) 00:33:03.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.008 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:03.008 nvme0n1 : 1.01 11641.63 45.48 0.00 0.00 10933.16 5106.35 14308.69 00:33:03.008 =================================================================================================================== 00:33:03.008 Total : 11641.63 45.48 0.00 0.00 10933.16 5106.35 14308.69 00:33:03.008 0 00:33:03.008 13:18:24 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:03.008 13:18:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:03.270 13:18:24 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:03.270 13:18:24 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:03.270 13:18:24 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:03.270 13:18:24 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:03.270 13:18:24 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:03.270 13:18:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.270 13:18:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:03.270 13:18:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:03.270 13:18:25 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:03.270 13:18:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:03.270 13:18:25 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:03.270 13:18:25 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:03.270 13:18:25 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:03.270 13:18:25 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:03.270 13:18:25 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:03.270 13:18:25 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:03.270 13:18:25 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:03.270 13:18:25 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:03.532 [2024-07-15 13:18:25.218605] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:03.532 [2024-07-15 13:18:25.218699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76f000 (107): Transport endpoint is not connected 00:33:03.532 [2024-07-15 13:18:25.219694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x76f000 (9): Bad file descriptor 00:33:03.532 [2024-07-15 13:18:25.220697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:03.532 [2024-07-15 13:18:25.220705] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:03.532 [2024-07-15 13:18:25.220711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:03.532 request: 00:33:03.532 { 00:33:03.532 "name": "nvme0", 00:33:03.532 "trtype": "tcp", 00:33:03.532 "traddr": "127.0.0.1", 00:33:03.532 "adrfam": "ipv4", 00:33:03.532 "trsvcid": "4420", 00:33:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:03.532 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:03.532 "prchk_reftag": false, 00:33:03.532 "prchk_guard": false, 00:33:03.532 "hdgst": false, 00:33:03.532 "ddgst": false, 00:33:03.532 "psk": ":spdk-test:key1", 00:33:03.532 "method": "bdev_nvme_attach_controller", 00:33:03.532 "req_id": 1 00:33:03.532 } 00:33:03.532 Got JSON-RPC error response 00:33:03.532 response: 00:33:03.532 { 00:33:03.532 "code": -5, 00:33:03.532 "message": "Input/output error" 00:33:03.532 } 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@33 -- # sn=998674864 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 998674864 00:33:03.532 1 links removed 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@33 -- # sn=976862270 00:33:03.532 13:18:25 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 976862270 00:33:03.532 1 links removed 00:33:03.532 13:18:25 keyring_linux -- keyring/linux.sh@41 -- # killprocess 939383 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 939383 ']' 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 939383 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 939383 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 939383' 00:33:03.532 killing process with pid 939383 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@967 -- # kill 939383 00:33:03.532 Received shutdown signal, test time was about 1.000000 seconds 00:33:03.532 00:33:03.532 Latency(us) 00:33:03.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.532 =================================================================================================================== 00:33:03.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:03.532 13:18:25 keyring_linux -- common/autotest_common.sh@972 -- # wait 939383 00:33:03.793 13:18:25 keyring_linux -- keyring/linux.sh@42 -- # killprocess 939225 00:33:03.793 13:18:25 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 939225 ']' 00:33:03.793 13:18:25 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 939225 00:33:03.793 13:18:25 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:03.793 13:18:25 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:03.793 13:18:25 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 939225 00:33:03.793 13:18:25 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:03.793 13:18:25 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:03.793 13:18:25 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 939225' 00:33:03.793 killing process with pid 939225 00:33:03.793 13:18:25 keyring_linux -- common/autotest_common.sh@967 -- # kill 939225 00:33:03.793 13:18:25 keyring_linux -- common/autotest_common.sh@972 -- # wait 939225 00:33:04.054 00:33:04.054 real 0m4.870s 00:33:04.054 user 0m8.461s 00:33:04.054 sys 0m1.456s 00:33:04.054 13:18:25 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:04.054 13:18:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:04.054 ************************************ 00:33:04.054 END TEST keyring_linux 00:33:04.054 ************************************ 00:33:04.054 13:18:25 -- common/autotest_common.sh@1142 -- # return 0 00:33:04.054 13:18:25 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:04.054 13:18:25 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:04.054 13:18:25 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:04.054 13:18:25 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:04.054 13:18:25 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:04.054 13:18:25 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:04.054 13:18:25 -- spdk/autotest.sh@339 -- # '[' 0 -eq 
1 ']' 00:33:04.054 13:18:25 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:04.054 13:18:25 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:04.054 13:18:25 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:04.055 13:18:25 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:04.055 13:18:25 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:04.055 13:18:25 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:04.055 13:18:25 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:04.055 13:18:25 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:04.055 13:18:25 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:04.055 13:18:25 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:04.055 13:18:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:04.055 13:18:25 -- common/autotest_common.sh@10 -- # set +x 00:33:04.055 13:18:25 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:04.055 13:18:25 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:04.055 13:18:25 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:04.055 13:18:25 -- common/autotest_common.sh@10 -- # set +x 00:33:12.194 INFO: APP EXITING 00:33:12.194 INFO: killing all VMs 00:33:12.194 INFO: killing vhost app 00:33:12.194 INFO: EXIT DONE 00:33:15.491 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:15.491 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:15.491 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:15.491 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:15.491 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:15.491 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:15.491 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:33:15.491 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:15.491 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:15.492 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:15.492 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:15.492 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:15.492 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:15.492 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:15.492 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:15.492 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:15.492 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:33:19.694 Cleaning 00:33:19.694 Removing: /var/run/dpdk/spdk0/config 00:33:19.694 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:19.694 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:19.694 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:19.694 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:19.694 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:19.694 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:19.694 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:19.694 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:19.694 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:19.694 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:19.694 Removing: /var/run/dpdk/spdk1/config 00:33:19.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:19.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:19.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:19.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:19.694 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:19.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:19.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:19.694 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:19.694 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:19.694 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:19.694 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:19.694 Removing: /var/run/dpdk/spdk2/config 00:33:19.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:19.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:19.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:19.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:19.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:19.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:19.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:19.694 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:19.694 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:19.694 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:19.694 Removing: /var/run/dpdk/spdk3/config 00:33:19.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:19.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:19.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:19.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:19.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:19.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:19.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:19.694 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:19.694 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:19.694 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:19.694 Removing: /var/run/dpdk/spdk4/config 00:33:19.694 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:19.695 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:19.695 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:19.695 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:19.695 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:19.695 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:19.695 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:19.695 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:19.695 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:19.695 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:19.695 Removing: /dev/shm/bdev_svc_trace.1 00:33:19.695 Removing: /dev/shm/nvmf_trace.0 00:33:19.695 Removing: /dev/shm/spdk_tgt_trace.pid454367 00:33:19.695 Removing: /var/run/dpdk/spdk0 00:33:19.695 Removing: /var/run/dpdk/spdk1 00:33:19.695 Removing: /var/run/dpdk/spdk2 00:33:19.695 Removing: /var/run/dpdk/spdk3 00:33:19.695 Removing: /var/run/dpdk/spdk4 00:33:19.695 Removing: /var/run/dpdk/spdk_pid452749 00:33:19.695 Removing: /var/run/dpdk/spdk_pid454367 00:33:19.695 Removing: /var/run/dpdk/spdk_pid454956 00:33:19.695 Removing: /var/run/dpdk/spdk_pid456017 00:33:19.695 Removing: /var/run/dpdk/spdk_pid456345 00:33:19.695 Removing: /var/run/dpdk/spdk_pid457433 00:33:19.695 Removing: /var/run/dpdk/spdk_pid457747 00:33:19.695 Removing: /var/run/dpdk/spdk_pid457943 00:33:19.695 Removing: /var/run/dpdk/spdk_pid458994 00:33:19.695 Removing: /var/run/dpdk/spdk_pid459673 00:33:19.695 Removing: /var/run/dpdk/spdk_pid459954 00:33:19.695 Removing: /var/run/dpdk/spdk_pid460231 00:33:19.695 
Removing: /var/run/dpdk/spdk_pid460631 00:33:19.695 Removing: /var/run/dpdk/spdk_pid461017 00:33:19.695 Removing: /var/run/dpdk/spdk_pid461374 00:33:19.695 Removing: /var/run/dpdk/spdk_pid461602 00:33:19.695 Removing: /var/run/dpdk/spdk_pid461827 00:33:19.695 Removing: /var/run/dpdk/spdk_pid463176 00:33:19.695 Removing: /var/run/dpdk/spdk_pid466983 00:33:19.695 Removing: /var/run/dpdk/spdk_pid467343 00:33:19.695 Removing: /var/run/dpdk/spdk_pid467710 00:33:19.695 Removing: /var/run/dpdk/spdk_pid467739 00:33:19.695 Removing: /var/run/dpdk/spdk_pid468339 00:33:19.695 Removing: /var/run/dpdk/spdk_pid468434 00:33:19.695 Removing: /var/run/dpdk/spdk_pid468874 00:33:19.695 Removing: /var/run/dpdk/spdk_pid469144 00:33:19.695 Removing: /var/run/dpdk/spdk_pid469504 00:33:19.695 Removing: /var/run/dpdk/spdk_pid469521 00:33:19.695 Removing: /var/run/dpdk/spdk_pid469879 00:33:19.695 Removing: /var/run/dpdk/spdk_pid469903 00:33:19.695 Removing: /var/run/dpdk/spdk_pid470437 00:33:19.695 Removing: /var/run/dpdk/spdk_pid470687 00:33:19.695 Removing: /var/run/dpdk/spdk_pid471074 00:33:19.695 Removing: /var/run/dpdk/spdk_pid471448 00:33:19.695 Removing: /var/run/dpdk/spdk_pid471470 00:33:19.695 Removing: /var/run/dpdk/spdk_pid471583 00:33:19.695 Removing: /var/run/dpdk/spdk_pid471889 00:33:19.695 Removing: /var/run/dpdk/spdk_pid472241 00:33:19.695 Removing: /var/run/dpdk/spdk_pid472595 00:33:19.695 Removing: /var/run/dpdk/spdk_pid472865 00:33:19.695 Removing: /var/run/dpdk/spdk_pid473056 00:33:19.695 Removing: /var/run/dpdk/spdk_pid473337 00:33:19.695 Removing: /var/run/dpdk/spdk_pid473684 00:33:19.695 Removing: /var/run/dpdk/spdk_pid474033 00:33:19.695 Removing: /var/run/dpdk/spdk_pid474305 00:33:19.695 Removing: /var/run/dpdk/spdk_pid474502 00:33:19.695 Removing: /var/run/dpdk/spdk_pid474772 00:33:19.695 Removing: /var/run/dpdk/spdk_pid475125 00:33:19.695 Removing: /var/run/dpdk/spdk_pid475474 00:33:19.695 Removing: /var/run/dpdk/spdk_pid475774 00:33:19.695 Removing: /var/run/dpdk/spdk_pid475968 00:33:19.695 Removing: /var/run/dpdk/spdk_pid476213 00:33:19.695 Removing: /var/run/dpdk/spdk_pid476565 00:33:19.695 Removing: /var/run/dpdk/spdk_pid476924 00:33:19.695 Removing: /var/run/dpdk/spdk_pid477273 00:33:19.695 Removing: /var/run/dpdk/spdk_pid477487 00:33:19.695 Removing: /var/run/dpdk/spdk_pid477695 00:33:19.695 Removing: /var/run/dpdk/spdk_pid478105 00:33:19.695 Removing: /var/run/dpdk/spdk_pid482930 00:33:19.695 Removing: /var/run/dpdk/spdk_pid541217 00:33:19.695 Removing: /var/run/dpdk/spdk_pid546933 00:33:19.695 Removing: /var/run/dpdk/spdk_pid559060 00:33:19.695 Removing: /var/run/dpdk/spdk_pid566052 00:33:19.695 Removing: /var/run/dpdk/spdk_pid571424 00:33:19.695 Removing: /var/run/dpdk/spdk_pid572099 00:33:19.695 Removing: /var/run/dpdk/spdk_pid580511 00:33:19.695 Removing: /var/run/dpdk/spdk_pid588070 00:33:19.695 Removing: /var/run/dpdk/spdk_pid588076 00:33:19.695 Removing: /var/run/dpdk/spdk_pid589080 00:33:19.695 Removing: /var/run/dpdk/spdk_pid590093 00:33:19.695 Removing: /var/run/dpdk/spdk_pid591098 00:33:19.695 Removing: /var/run/dpdk/spdk_pid591770 00:33:19.695 Removing: /var/run/dpdk/spdk_pid591788 00:33:19.695 Removing: /var/run/dpdk/spdk_pid592109 00:33:19.954 Removing: /var/run/dpdk/spdk_pid592251 00:33:19.954 Removing: /var/run/dpdk/spdk_pid592378 00:33:19.954 Removing: /var/run/dpdk/spdk_pid593448 00:33:19.954 Removing: /var/run/dpdk/spdk_pid594453 00:33:19.954 Removing: /var/run/dpdk/spdk_pid595460 00:33:19.954 Removing: /var/run/dpdk/spdk_pid596131 00:33:19.954 Removing: 
/var/run/dpdk/spdk_pid596140 00:33:19.954 Removing: /var/run/dpdk/spdk_pid596471 00:33:19.954 Removing: /var/run/dpdk/spdk_pid597788 00:33:19.954 Removing: /var/run/dpdk/spdk_pid599005 00:33:19.954 Removing: /var/run/dpdk/spdk_pid609674 00:33:19.954 Removing: /var/run/dpdk/spdk_pid610028 00:33:19.954 Removing: /var/run/dpdk/spdk_pid615738 00:33:19.954 Removing: /var/run/dpdk/spdk_pid623258 00:33:19.954 Removing: /var/run/dpdk/spdk_pid626761 00:33:19.954 Removing: /var/run/dpdk/spdk_pid639906 00:33:19.954 Removing: /var/run/dpdk/spdk_pid651710 00:33:19.954 Removing: /var/run/dpdk/spdk_pid653712 00:33:19.954 Removing: /var/run/dpdk/spdk_pid654723 00:33:19.954 Removing: /var/run/dpdk/spdk_pid676604 00:33:19.954 Removing: /var/run/dpdk/spdk_pid682073 00:33:19.954 Removing: /var/run/dpdk/spdk_pid712981 00:33:19.954 Removing: /var/run/dpdk/spdk_pid718865 00:33:19.954 Removing: /var/run/dpdk/spdk_pid720720 00:33:19.954 Removing: /var/run/dpdk/spdk_pid722992 00:33:19.954 Removing: /var/run/dpdk/spdk_pid723328 00:33:19.954 Removing: /var/run/dpdk/spdk_pid723444 00:33:19.954 Removing: /var/run/dpdk/spdk_pid723949 00:33:19.954 Removing: /var/run/dpdk/spdk_pid724849 00:33:19.954 Removing: /var/run/dpdk/spdk_pid727013 00:33:19.954 Removing: /var/run/dpdk/spdk_pid727986 00:33:19.954 Removing: /var/run/dpdk/spdk_pid728650 00:33:19.954 Removing: /var/run/dpdk/spdk_pid731218 00:33:19.954 Removing: /var/run/dpdk/spdk_pid731975 00:33:19.954 Removing: /var/run/dpdk/spdk_pid732778 00:33:19.954 Removing: /var/run/dpdk/spdk_pid738187 00:33:19.954 Removing: /var/run/dpdk/spdk_pid751142 00:33:19.954 Removing: /var/run/dpdk/spdk_pid755965 00:33:19.954 Removing: /var/run/dpdk/spdk_pid763781 00:33:19.954 Removing: /var/run/dpdk/spdk_pid765318 00:33:19.954 Removing: /var/run/dpdk/spdk_pid766862 00:33:19.954 Removing: /var/run/dpdk/spdk_pid772727 00:33:19.954 Removing: /var/run/dpdk/spdk_pid778569 00:33:19.954 Removing: /var/run/dpdk/spdk_pid788390 00:33:19.954 Removing: /var/run/dpdk/spdk_pid788517 00:33:19.954 Removing: /var/run/dpdk/spdk_pid794066 00:33:19.954 Removing: /var/run/dpdk/spdk_pid794354 00:33:19.954 Removing: /var/run/dpdk/spdk_pid794498 00:33:19.954 Removing: /var/run/dpdk/spdk_pid795077 00:33:19.954 Removing: /var/run/dpdk/spdk_pid795084 00:33:19.954 Removing: /var/run/dpdk/spdk_pid801019 00:33:19.954 Removing: /var/run/dpdk/spdk_pid801640 00:33:19.954 Removing: /var/run/dpdk/spdk_pid807501 00:33:19.954 Removing: /var/run/dpdk/spdk_pid810700 00:33:19.954 Removing: /var/run/dpdk/spdk_pid817593 00:33:19.954 Removing: /var/run/dpdk/spdk_pid824655 00:33:19.954 Removing: /var/run/dpdk/spdk_pid835600 00:33:19.954 Removing: /var/run/dpdk/spdk_pid844743 00:33:19.954 Removing: /var/run/dpdk/spdk_pid844745 00:33:19.954 Removing: /var/run/dpdk/spdk_pid868706 00:33:19.954 Removing: /var/run/dpdk/spdk_pid869395 00:33:19.954 Removing: /var/run/dpdk/spdk_pid870072 00:33:20.213 Removing: /var/run/dpdk/spdk_pid870762 00:33:20.213 Removing: /var/run/dpdk/spdk_pid871818 00:33:20.213 Removing: /var/run/dpdk/spdk_pid872510 00:33:20.213 Removing: /var/run/dpdk/spdk_pid873190 00:33:20.213 Removing: /var/run/dpdk/spdk_pid873876 00:33:20.214 Removing: /var/run/dpdk/spdk_pid879596 00:33:20.214 Removing: /var/run/dpdk/spdk_pid879927 00:33:20.214 Removing: /var/run/dpdk/spdk_pid888184 00:33:20.214 Removing: /var/run/dpdk/spdk_pid888359 00:33:20.214 Removing: /var/run/dpdk/spdk_pid891084 00:33:20.214 Removing: /var/run/dpdk/spdk_pid898734 00:33:20.214 Removing: /var/run/dpdk/spdk_pid898787 00:33:20.214 Removing: 
/var/run/dpdk/spdk_pid905363 00:33:20.214 Removing: /var/run/dpdk/spdk_pid907755 00:33:20.214 Removing: /var/run/dpdk/spdk_pid909952 00:33:20.214 Removing: /var/run/dpdk/spdk_pid911450 00:33:20.214 Removing: /var/run/dpdk/spdk_pid913728 00:33:20.214 Removing: /var/run/dpdk/spdk_pid915181 00:33:20.214 Removing: /var/run/dpdk/spdk_pid926015 00:33:20.214 Removing: /var/run/dpdk/spdk_pid926521 00:33:20.214 Removing: /var/run/dpdk/spdk_pid927079 00:33:20.214 Removing: /var/run/dpdk/spdk_pid930120 00:33:20.214 Removing: /var/run/dpdk/spdk_pid930790 00:33:20.214 Removing: /var/run/dpdk/spdk_pid931387 00:33:20.214 Removing: /var/run/dpdk/spdk_pid936842 00:33:20.214 Removing: /var/run/dpdk/spdk_pid937127 00:33:20.214 Removing: /var/run/dpdk/spdk_pid938638 00:33:20.214 Removing: /var/run/dpdk/spdk_pid939225 00:33:20.214 Removing: /var/run/dpdk/spdk_pid939383 00:33:20.214 Clean 00:33:20.214 13:18:41 -- common/autotest_common.sh@1451 -- # return 0 00:33:20.214 13:18:41 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:20.214 13:18:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:20.214 13:18:41 -- common/autotest_common.sh@10 -- # set +x 00:33:20.214 13:18:42 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:20.214 13:18:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:20.214 13:18:42 -- common/autotest_common.sh@10 -- # set +x 00:33:20.474 13:18:42 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:20.474 13:18:42 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:20.474 13:18:42 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:20.474 13:18:42 -- spdk/autotest.sh@391 -- # hash lcov 00:33:20.474 13:18:42 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:20.474 13:18:42 -- spdk/autotest.sh@393 -- # hostname 00:33:20.474 13:18:42 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:20.474 geninfo: WARNING: invalid characters removed from testname! 
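The geninfo capture above and the lcov calls that follow implement a standard capture/merge/filter flow: test-time counters are captured into cov_test.info, merged with the pre-test baseline, and third-party paths (dpdk, /usr, bundled examples and apps) are stripped before the report is rendered. A condensed equivalent, showing only a subset of the remove patterns, with file names as in this run and the directory layout assumed to be an SPDK checkout:
# Condensed coverage post-processing; LCOV_OPTS carries a subset of the rc flags used by autotest.sh.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
lcov $LCOV_OPTS -c -d ./spdk -t "$(hostname)" -o cov_test.info       # capture test counters
lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info  # merge with baseline
lcov $LCOV_OPTS -r cov_total.info '*/dpdk/*' '/usr/*' -o cov_total.info  # drop third-party code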
00:33:47.055 13:19:06 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:47.998 13:19:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:49.909 13:19:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:51.291 13:19:12 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:52.728 13:19:14 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:54.646 13:19:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:56.038 13:19:17 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:56.038 13:19:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.038 13:19:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:56.038 13:19:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.038 13:19:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.038 13:19:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.038 13:19:17 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:56.038 13:19:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:56.038 13:19:17 -- paths/export.sh@5 -- $ export PATH
00:33:56.038 13:19:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:56.038 13:19:17 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:56.038 13:19:17 -- common/autobuild_common.sh@444 -- $ date +%s
00:33:56.038 13:19:17 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721042357.XXXXXX
00:33:56.038 13:19:17 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721042357.YpnQ1i
00:33:56.038 13:19:17 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:33:56.038 13:19:17 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:33:56.038 13:19:17 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:33:56.038 13:19:17 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:56.038 13:19:17 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:56.038 13:19:17 -- common/autobuild_common.sh@460 -- $ get_config_params
00:33:56.038 13:19:17 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:33:56.038 13:19:17 -- common/autotest_common.sh@10 -- $ set +x
00:33:56.038 13:19:17 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:33:56.038 13:19:17 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:33:56.038 13:19:17 -- pm/common@17 -- $ local monitor
00:33:56.038 13:19:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:56.038 13:19:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:56.038 13:19:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:56.038 13:19:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:56.038 13:19:17 -- pm/common@21 -- $ date +%s
00:33:56.038 13:19:17 -- pm/common@25 -- $ sleep 1
00:33:56.038 13:19:17 -- pm/common@21 -- $ date +%s
00:33:56.038 13:19:17 -- pm/common@21 -- $ date +%s
00:33:56.038 13:19:17 -- pm/common@21 -- $ date +%s
00:33:56.038 13:19:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721042357
00:33:56.038 13:19:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721042357
00:33:56.038 13:19:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721042357
00:33:56.038 13:19:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721042357
00:33:56.038 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721042357_collect-vmstat.pm.log
00:33:56.039 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721042357_collect-cpu-load.pm.log
00:33:56.039 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721042357_collect-cpu-temp.pm.log
00:33:56.039 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721042357_collect-bmc-pm.bmc.pm.log
00:33:56.981 13:19:18 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:33:56.981 13:19:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:33:56.981 13:19:18 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:56.981 13:19:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:56.981 13:19:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:56.981 13:19:18 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:56.981 13:19:18 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:56.981 13:19:18 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:56.981 13:19:18 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:56.981 13:19:18 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:56.981 13:19:18 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:56.981 13:19:18 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:56.981 13:19:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:56.981 13:19:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:56.981 13:19:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:33:56.981 13:19:18 -- pm/common@44 -- $ pid=952135
00:33:56.981 13:19:18 -- pm/common@50 -- $ kill -TERM 952135
00:33:56.981 13:19:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:56.981 13:19:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:33:56.981 13:19:18 -- pm/common@44 -- $ pid=952136
00:33:56.981 13:19:18 -- pm/common@50 -- $ kill -TERM 952136
00:33:56.981 13:19:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:56.981 13:19:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:33:56.981 13:19:18 -- pm/common@44 -- $ pid=952138
00:33:56.981 13:19:18 -- pm/common@50 -- $ kill -TERM 952138
00:33:56.981 13:19:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:56.981 13:19:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:33:56.981 13:19:18 -- pm/common@44 -- $ pid=952154
00:33:56.981 13:19:18 -- pm/common@50 -- $ sudo -E kill -TERM 952154
00:33:56.981 + [[ -n 328416 ]]
00:33:56.981 + sudo kill 328416
00:33:57.253 [Pipeline] }
00:33:57.277 [Pipeline] // stage
00:33:57.283 [Pipeline] }
00:33:57.306 [Pipeline] // timeout
00:33:57.312 [Pipeline] }
00:33:57.333 [Pipeline] // catchError
00:33:57.339 [Pipeline] }
00:33:57.360 [Pipeline] // wrap
00:33:57.366 [Pipeline] }
00:33:57.385 [Pipeline] // catchError
00:33:57.395 [Pipeline] stage
00:33:57.399 [Pipeline] { (Epilogue)
00:33:57.416 [Pipeline] catchError
00:33:57.418 [Pipeline] {
00:33:57.436 [Pipeline] echo
00:33:57.438 Cleanup processes
00:33:57.447 [Pipeline] sh
00:33:57.742 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:57.742 952239 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:33:57.742 952683 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:57.758 [Pipeline] sh
00:33:58.044 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:58.044 ++ grep -v 'sudo pgrep'
00:33:58.044 ++ awk '{print $1}'
00:33:58.044 + sudo kill -9 952239
00:33:58.058 [Pipeline] sh
00:33:58.345 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:10.585 [Pipeline] sh
00:34:10.871 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:10.871 Artifacts sizes are good
00:34:10.889 [Pipeline] archiveArtifacts
00:34:10.897 Archiving artifacts
00:34:11.086 [Pipeline] sh
00:34:11.401 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:34:11.416 [Pipeline] cleanWs
00:34:11.427 [WS-CLEANUP] Deleting project workspace...
00:34:11.427 [WS-CLEANUP] Deferred wipeout is used...
00:34:11.435 [WS-CLEANUP] done
00:34:11.437 [Pipeline] }
00:34:11.457 [Pipeline] // catchError
00:34:11.471 [Pipeline] sh
00:34:11.755 + logger -p user.info -t JENKINS-CI
00:34:11.765 [Pipeline] }
00:34:11.781 [Pipeline] // stage
00:34:11.786 [Pipeline] }
00:34:11.801 [Pipeline] // node
00:34:11.806 [Pipeline] End of Pipeline
00:34:11.838 Finished: SUCCESS